What better way to create a movie trailer about an artificially enhanced human than to use the reality behind the premise: artificial intelligence (AI). That’s just what a partnership between IBM Research and 20th Century Fox recently set out to do, when they used machine learning techniques to produce what they described as the “first ever cognitive movie trailer.” You’ll have to judge the merits of the result yourself, but what is beyond doubt is that this is just one example of the many ways AI and machine learning techniques are already changing the face of the entertainment industry.
It only makes sense that creative industries are leading the pack when it comes to the adoption of and experimentation with AI. Media, entertainment, and advertising are all on the cutting edge of AI and machine learning. While these industries are out in front, we are also seeing AI make inroads in other industries that stand to gain from this technology, like business services, healthcare, finance, agriculture, education, and manufacturing, to name just a few. The revenue forecast for enterprise AI varies depending on the source, but it was estimated at about $300-$350 million in 2016 and is predicted to reach upwards of $30 billion by 2025. The technologies included in this forecast are cognitive computing, natural language processing, image recognition, speech recognition, predictive APIs, deep learning, and machine learning.
Back to the entertainment industry. Let’s look at some of the ways machine learning is changing the entertainment and advertising industries, and in the process, give you the opportunity to critique that AI movie trailer.
Creating an AI Trailer
The process of creating a trailer for the new horror movie “Morgan” involved machine learning techniques and experimental APIs on IBM’s Watson platform. Watson was taken to film school: it analyzed hundreds of existing horror movie trailers to learn what kept viewers on edge before being fed the entire final cut of the upcoming movie. From that analysis, the program selected the 10 most usable moments in the film, and a human editor then created the finished trailer from those clips. Pretty cool, isn’t it?
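The final step of that pipeline, scoring scenes and handing the best ones to a human editor, can be sketched in a few lines of Python. Nothing here reflects Watson's actual internals; the scene names and "tension" scores are invented purely to illustrate the selection step:

```python
# Toy sketch of the final step: given per-scene "tension" scores produced
# by an upstream analysis model (all numbers invented here), keep the top
# moments for a human editor to cut into a trailer.
scenes = {"lab_reveal": 0.91, "hallway": 0.34, "escape": 0.88,
          "breakfast": 0.12, "attack": 0.95}

top_moments = sorted(scenes, key=scenes.get, reverse=True)[:3]
# → ["attack", "lab_reveal", "escape"]
```

The model does the heavy lifting of scoring; the ranking and cut-off at the end is almost trivially simple, which is why the human editor remains the decisive creative step.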
The entire process took about 24 hours, compared to the labor-intensive manual edit that would normally take 10 to 30 days. According to John R. Smith, multimedia and vision manager at IBM, that capacity to cut a process from weeks to hours reveals the true power of AI.
Smith went on to say, “The combination of machine intelligence and human expertise is a powerful one. This research investigation is simply the first of many into what we hope will be a promising area of machine and human creativity. We don’t have the only solution for this challenge, but we’re excited about pushing the possibilities of how AI can augment the expertise and creativity of individuals.”
Another example of the use of AI in the movies comes with “Impossible Things,” an independent horror movie co-written by AI software. The movie is the brainchild of Jack Zhang whose Greenlight Essential company has developed AI software which, per their website, “allows users with neither programming nor mathematics background to explore and discover repeatable patterns from decades of film data.”
Impossible Things is, according to Zhang’s Kickstarter page, the first AI co-written feature film, marking “the next step in human-computer collaboratively created original content.” The software uses Natural Language Processing (NLP), a machine learning technique, to analyze thousands of movie plot summaries correlated with box office performance. The result, they say, is an AI system smart enough to recognize the plot patterns that lead to box office success.
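We don't know the specifics of Greenlight Essentials' model, but the general idea of correlating plot vocabulary with box office results can be shown with a toy sketch. The summaries and gross figures below are entirely invented:

```python
from collections import Counter

# Toy corpus: plot summaries paired with box office gross in millions
# (all figures invented for illustration).
films = [
    ("a scientist creates a creature that escapes the lab", 120.0),
    ("a haunted house traps a family during a storm", 45.0),
    ("a creature hunts teenagers at a remote lab", 95.0),
    ("a family inherits a haunted doll", 30.0),
]

def word_score(films):
    """Average gross of the films whose summary contains each word."""
    totals, counts = Counter(), Counter()
    for summary, gross in films:
        for word in set(summary.split()):
            totals[word] += gross
            counts[word] += 1
    return {w: totals[w] / counts[w] for w in totals}

scores = word_score(films)
# In this toy data, "creature" films average a far higher gross than
# "haunted" films — the kind of crude pattern such a system surfaces.
```

A real system would work with far richer features than single words, but the principle is the same: mine historical plot data for patterns that correlate with commercial performance.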
For the Impossible Things project, the AI generated the initial premise and essential plot points, which the creative team used as a point of reference to grow their story. At the time of writing, the movie has 161 backers pledging more than $30,000 to the project. It’s clearly early days, but this is one worth watching. Will the finished product achieve the box office success the AI analysis promises?
Machine learning is also helping entertainment providers recommend personalized content, based on the user’s previous viewing activity and behavior. Take Netflix, for example. They run a large number of machine learning workflows every day to predict what we want to watch. The tech team at Netflix has created an AI framework called Meson to support these efforts, and you see the work of Meson every time you open Netflix and are served up suggestions on what to watch next.
As the Netflix tech team describes it: “Meson is a general purpose workflow orchestration and scheduling framework that we built to manage ML pipelines that execute workloads across heterogeneous systems. It manages the lifecycle of several ML pipelines that build, train and validate personalization algorithms that drive video recommendations.”
We aren’t talking about simple viewing recommendations though—machine learning applications are also being used to fine tune the way the home page is presented to users, in particular the “Continue Watching” (CW) row. Using data based on factors such as subscription history, previous interactions with content, and even contextual features such as time of day and device, the content and placement of the CW row can be amended for maximum effect.
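Netflix hasn't published its exact scoring formula, but contextual ranking of that kind can be sketched with a toy scoring function. The features and weights below are hypothetical, chosen only to show how time of day and device might shift a title's placement:

```python
from dataclasses import dataclass

@dataclass
class Context:
    hour: int          # local time of day, 0-23
    device: str        # "tv", "mobile", ...
    minutes_left: int  # unwatched minutes remaining in the title

def continue_watching_score(ctx: Context) -> float:
    """Toy relevance score for a title in the Continue Watching row.

    The weights are hypothetical: short leftovers are boosted on mobile
    (commute-length sessions), while evenings on a TV favor resuming
    long-form titles.
    """
    score = 1.0
    if ctx.device == "mobile" and ctx.minutes_left <= 25:
        score += 0.5    # a snackable remainder suits a mobile session
    if ctx.device == "tv" and 18 <= ctx.hour <= 23:
        score += 0.75   # prime time on the big screen favors resuming
    return score
```

In production such weights would be learned from interaction data rather than hand-set, but the shape of the problem is the same: score each candidate title per context, then order the row accordingly.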
Although still in its infancy, it’s not hard to see the massive potential here. Combined with other recent developments such as Google’s Knowledge Graph and Facebook’s Graph Search, not to mention where Siri and Alexa might take us, the face of search and recommendation in entertainment is going to be very different, very soon.
AI as Creative Director (and Rembrandt)
Advertising is another creative industry where machine learning is beginning to have an impact. The Drum, in partnership with native advertising company Teads, recently produced a short documentary to demonstrate how some in the ad industry are exploring how AI can aid the creative process. The Automation of Creativity features some interesting new concepts.
- McCann Erickson in Japan developed an AI creative director, AI-CD ß, that analyzes the client’s brief before producing creative ideas. The machine has even been taken to a meeting where ideas were presented to the client, with Shun Matsuzaka, communication planner at McCann Erickson, saying, “We want to treat AI-CD ß like a normal creative director. And it’s important for a creative director to be in the meeting.” That is a pitch I would’ve loved to sit in on.
- A London ad company created a bus shelter ad display incorporating facial recognition and AI techniques. The display, which featured a fictitious coffee brand, presented different images and copy depending on how engaged people were judged to be. The system was designed to “learn” which ads created the most engagement and reaction in an ongoing process of improvement.
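The details of that bus shelter system aren't public, but "learn which ads create the most engagement" is a classic multi-armed bandit problem. Below is a minimal epsilon-greedy sketch; the variant names and the engagement signal are purely hypothetical:

```python
import random

class AdSelector:
    """Epsilon-greedy sketch of an ad display that learns from engagement.

    Each creative variant is an "arm"; observed engagement (say, a face
    turned toward the screen) is the reward. This is the standard bandit
    technique the experiment resembles, not the actual deployed system.
    """
    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.shows = {v: 0 for v in variants}  # times each ad was shown
        self.wins = {v: 0 for v in variants}   # engagements observed

    def pick(self):
        # Occasionally explore a random variant; otherwise exploit the
        # variant with the best observed engagement rate so far.
        if random.random() < self.epsilon:
            return random.choice(list(self.shows))
        def rate(v):
            return self.wins[v] / self.shows[v] if self.shows[v] else 0.0
        return max(self.shows, key=rate)

    def record(self, variant, engaged):
        self.shows[variant] += 1
        self.wins[variant] += engaged
```

Run over thousands of passers-by, a selector like this converges on showing the variant that earns the most engagement, which is the "ongoing process of improvement" the installation described.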
Where does Rembrandt come in? The documentary also features another creative experiment, in which machine learning was used to produce a new portrait based on previous work by the artist. Watch the video to judge the results for yourself.
AI and Creativity in Entertainment
The same question comes up in all of these AI projects: can AI be truly creative?
That’s a hard question to answer right now, because we don’t know what the future holds for AI in creative industries. The clear consensus at present is that AI, which is predicated on the analysis of past performance and historical data, is incapable of being truly creative on its own. I agree. Technology can aid the creative process, but it’s the human factor that brings it all together. What AI can do effectively, though, is identify patterns and trends that humans may not have unearthed and, as a consequence, set the human creative process off in new directions. At the very least, it can speed up labor-intensive tasks to free the creative human mind.
We are only scratching the surface when it comes to harnessing the power of machine learning to enhance the entertainment experience, and the next several years are going to be fascinating.
Photo Credit: joeyt1267 Flickr via Compfight cc
Shelly Kramer is a Principal Analyst and Founding Partner at Futurum Research. A serial entrepreneur with a technology centric focus, she has worked alongside some of the world’s largest brands to embrace disruption and spur innovation, understand and address the realities of the connected customer, and help navigate the process of digital transformation. She brings 20 years' experience as a brand strategist to her work at Futurum, and has deep experience helping global companies with marketing challenges, GTM strategies, messaging development, and driving strategy and digital transformation for B2B brands across multiple verticals. Shelly's coverage areas include Collaboration/CX/SaaS, platforms, ESG, and Cybersecurity, as well as topics and trends related to the Future of Work, the transformation of the workplace and how people and technology are driving that transformation. A transplanted New Yorker, she has learned to love life in the Midwest, and has firsthand experience that some of the most innovative minds and most successful companies in the world also happen to live in “flyover country.”