Keeping it Real: Your Brand and Deepfake

Craig Strydom, Creative Director of Brand

On November 22, 1963, on his way to deliver a speech in Dallas, John F. Kennedy was assassinated. He never got to make his speech that day. Nonetheless, the 2,590-word speech survived. Now, 55 years later, thanks to the Times of London's "JFK Unsilenced" project, it is finally possible to hear the speech in JFK's own voice. The technology that makes this possible is known as "deepfake," a portmanteau of "deep learning" and "fake." Astoundingly, a database of 116,777 phonetic sound units was assembled from 831 of his speeches and interviews to create a near-perfect recreation of a speech he never made.

Putting the Real into Surreal

Likewise, 30 years after his death, Salvador Dali came back to life in a video in which, talking to camera, he shares his thoughts with visitors to the Dali Museum. How was Dali resurrected? Again, deepfake. The process involved pulling content from millions of frames of interviews with Dali and drawing on artificial intelligence to overlay a digital mask of sorts over an actor's face, allowing the actor to appear as Dali no matter what expression he made. To recreate the voice, a voice-over artist with the right accent was sourced. The words were Dali's own.

Breakthrough Technologies

Deepfake – fake video or audio that looks and sounds just like the real thing – is the latest breakthrough technology to hit the cybersphere. It is not surprising to see new technologies cross over from the lab into the commercial realm. No sooner are new technologies unveiled than they are co-opted by brands and brand websites for commercial purposes, including brand communications. By way of example, Augmented Reality (AR) was quickly assimilated by brands seeking new ways to make their digital properties more experiential. Streaming is another example. And now it's deepfake's turn.

Putting a Mouth to One’s Words

Deepfake is a technique for synthesizing human images and audio using artificial intelligence. How it works is conceptually simple. Deepfake relies on a technology called visual synthesis: among other things, developers can use existing video to build a 3-D model of a subject's face, which is then reanimated to match the mouth movements and expressions of a separate actor reading from a new script – taking the concept of putting words in someone's mouth to a whole new level.
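To make that concrete, the sketch below uses the open-source MediaPipe and OpenCV libraries to extract per-frame facial landmarks from a driving actor's video – the kind of mouth-and-expression signal a reenactment system can use to animate the target face. It is a minimal illustration of one ingredient of the pipeline, not the full visual-synthesis process; the file name and setup are placeholder assumptions.

```python
# Sketch: capture the "driving" facial motion from an actor's video.
# Assumes: pip install mediapipe opencv-python; "actor.mp4" is a placeholder path.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=False,   # treat the input as a video stream
    max_num_faces=1,
    refine_landmarks=True,     # adds lip/iris detail useful for mouth sync
)

cap = cv2.VideoCapture("actor.mp4")
driving_signal = []            # one list of (x, y, z) landmarks per frame

while True:
    ok, frame_bgr = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input, OpenCV reads BGR
    results = face_mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        landmarks = results.multi_face_landmarks[0].landmark
        driving_signal.append([(p.x, p.y, p.z) for p in landmarks])

cap.release()
face_mesh.close()
print(f"Captured {len(driving_signal)} frames of facial motion")
# A reenactment model would map this motion onto the 3-D model of the target face.
```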

Technically speaking – according to cybersecurity expert and CSO Online journalist J.M. Porup – deepfake involves what are known as "generative adversarial networks," or GANs, in which two machine-learning (ML) models duke it out. One ML model trains on a data set and then creates video forgeries, while the other attempts to detect the forgeries. The forger keeps creating fakes until the other ML model can no longer detect the forgery. This is why videos of former presidents and Hollywood celebrities have frequently been used in early, first-generation deepfakes – there is simply a ton of publicly available footage to train the forger.
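For readers who want to see what that forger-versus-detector contest looks like in code, here is a minimal, generic GAN training loop in PyTorch. It is an illustrative sketch only: the tiny fully connected networks, the random stand-in data, and the hyperparameters are assumptions chosen for brevity, not the architecture any production deepfake system actually uses.

```python
import torch
import torch.nn as nn

# The "forger": turns random noise into a fake image (flattened to a vector here).
class Generator(nn.Module):
    def __init__(self, latent_dim=100, img_dim=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# The "detector": scores an image as real (1) or forged (0).
class Discriminator(nn.Module):
    def __init__(self, img_dim=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real_batch = torch.rand(32, 64 * 64)  # stand-in for a batch of real face frames

for step in range(100):
    # 1) Train the detector: real frames should score 1, forged frames 0.
    z = torch.randn(32, 100)
    fake = G(z).detach()
    d_loss = loss_fn(D(real_batch), torch.ones(32, 1)) + \
             loss_fn(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the forger: produce frames the detector scores as real.
    z = torch.randn(32, 100)
    g_loss = loss_fn(D(G(z)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The adversarial back-and-forth is the point: every time the detector gets better at spotting forgeries, it hands the forger a sharper training signal, which is why large archives of public footage make such effective training material.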

A Brave New World

Deepfake as a concept has been around for a long time. Who can forget the propaganda photographs of communist leaders such as Stalin, in which those who had fallen out of favor were simply airbrushed out of the photograph and, in that manner, out of history? Similarly, books such as Orwell's "1984" and Huxley's "Brave New World" foreshadowed technologies like deepfake. In fact, America was nearly brought to a standstill at 8 PM on October 30, 1938, when Orson Welles' radio production of "The War of the Worlds" produced widespread panic among concerned citizens. Presaging deepfake, the broadcast was disguised as a typical evening of radio programming interrupted by a series of news bulletins. In other words, it was made to feel real.

Deepfake for Good (or Bad)?

Might there be ways to use deepfake for good? A recent 'Malaria No More' campaign, created by the agency Synthesia, certainly makes the case. Using deepfake technology to communicate its message in a novel way, the charity created a 55-second web video featuring soccer star David Beckham seamlessly delivering the charity's message in nine different languages, with mouth movements and facial expressions as convincing as a native speaker's in any of the nine languages used.

Mostly, however, warnings against deepfake technology run deep: from fake emergency alerts (see Welles above), to the possibility of destroying someone's life or marriage through fake sex tapes, not to mention the very real possibility of disrupting elections with fake messages. The possibilities are endless. So much so that 2016 presidential candidate Marco Rubio called deepfake "the equivalent of nuclear weapons."

Your Brand and Deepfake

Brands now have the opportunity to draw on deepfake technology for their digital messages in ways that have barely been explored, such as creating fake endorsements (assuming the brand has the rights to do so). In theory, nothing stops us from seeing Bruce Lee endorsing Nike. This has the potential to produce the same effect as Natalie Cole's 1991 virtual duet with her late father, Nat King Cole, or the memorable on-stage hologram performance by the late rapper Tupac Shakur.

Dark Patterns and Light

Moving forward, brands will have to decide how far to push the technological envelope on their websites, in communications, and in user engagement – not only with deepfake, but with all new technologies. In fact, authorities are placing such importance on deepfake that DARPA (a Blue Water client) is investing time and money in finding better ways to authenticate video and to ensure that nefarious uses of the technique don't see the light of day.

Failure to self-regulate could result in governments taking control. Only this week, Axios reported that Senators Mark Warner and Deb Fischer debuted a measure known as the Deceptive Experiences To Online Users Reduction Act, intended to crack down on manipulative design features, known as "dark patterns," that major web platforms – Google, Facebook, Amazon – use to capture users' consent or data. Which raises the question: what will we call the measure regulating deepfake if we can't find ways to self-regulate?

*

Blue Water is a full-service agency that delivers exceptional online and offline experiences for private and public-sector clients. Leveraging data, we operate at the intersection of experience design, creative, content strategy, technology, and marketing, which we call our Digital Convergence Model (DCM), and work in small, cross-functional teams with clients as our partners and users as our focus. This forms the foundation of our approach to turning digital challenges into endless possibilities.

Source: bwm.com/blog/keeping-it-real-your-brand-and-deepfake
