Perhaps you’ve seen the videos: a typically cool former President Barack Obama using obscenities to describe President Donald Trump. Jennifer Lawrence answering questions at the Golden Globes with Steve Buscemi’s face. Mark Zuckerberg admitting he’s in cahoots with Spectre, the evil organization in the James Bond films.
All are deepfake videos—videos manipulated to make the subject do or say things that never happened. It’s like Photoshop on steroids.
What is deepfake content?
At first glance, deepfake software—short for “deep learning” plus “fake,” after the deep-learning artificial intelligence that powers it—may seem harmless, something people with too much time on their hands might do for a laugh. But in the wrong hands, the possibilities are far more nefarious. Take, for instance, a sex “tape” with a celebrity’s face morphed in. “An illusion can be utilized either to insinuate a person for a wrongdoing or provide a fabricated alibi to argue against it,” said Siwei Lyu, Ph.D., director of the Computer Vision and Machine Learning lab at the University at Albany, State University of New York. “It also adds perceptual support to make fake news more believable.”
Deepfake videos generally fall into three categories, each with the potential to create a nightmare scenario for the person being targeted.
Face swapping: Just as it sounds, the face of someone in a video is replaced with a different face. Exhibit A: Jennifer Lawrence speaking at the Golden Globes with Steve Buscemi’s face. Or, more maliciously, a celebrity’s face plastered onto pornographic content he or she did not make or participate in.
Lip synching: A desired audio track is converted into “mouth points” on a video, and then the details (such as teeth) are filled in. The effect is that the target appears to say something he or she never said. That’s how Mark Zuckerberg looked as if he was boasting about stealing data with the help of Spectre.
Puppeteering: A subject is manipulated to perform movements he or she didn’t actually do. This is how computer scientists were able to make the Mona Lisa speak and laugh (see video below).
Examples of deepfake videos
Barack Obama appears to let loose on what we think he really thinks, when in fact, it’s a deepfake put together by Jordan Peele to show how harmful the technology can be. See it here.
Jennifer Lawrence looks like Steve Buscemi as she’s answering questions at the Golden Globes, and that’s because someone placed his face on her head. See it here.
Mark Zuckerberg appears to relish data stealing by joining forces with an evil organization out of James Bond. The poor quality and obviously fake voice are a dead giveaway that this is a deepfake video. See it here.
Nicolas Cage appears in shows and movies he has never starred in—including Friends and The Wizard of Oz. See it here.
Bill Hader’s impression of Arnold Schwarzenegger is dead on, especially when his face morphs into the Terminator himself. See it here.
The Mona Lisa comes to life, using new technology that doesn’t require a range of images, just a single very famous one. See it here.
This example isn’t as sophisticated as a true deepfake video, but it shows how small tweaks have the potential to be quite convincing. Here, a video from the White House suggests that CNN reporter Jim Acosta acted aggressively toward a White House intern, but third-party fact-checkers argue that Acosta’s arm movements were altered to make it merely appear so. See it here.
How do deepfakes work?
Deepfakes are facial manipulations based on AI, the backbone of which is something called “deep neural networks,” said Lyu. This technology is able to process raw data in a non-linear way in order to see trends, predict patterns, and solve problems. Deep-learning algorithms underlie a whole range of products and services we now use on a day-to-day basis—such as virtual assistants, facial-recognition security systems, and Netflix recommendations.
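To make the “non-linear” point concrete, here is a toy sketch—not any production deepfake system—of what a deep neural network does at its core: each layer computes weighted sums of its inputs and passes them through a non-linear activation, and stacking layers lets the network capture patterns a straight line can’t. All weights and inputs below are made-up illustrative values.

```python
import math

def relu(x):
    # Non-linear activation: without it, stacked layers would
    # collapse into a single linear transformation.
    return max(0.0, x)

def dense(inputs, weights, biases):
    # One fully connected layer: a weighted sum plus a bias per neuron.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def forward(pixels):
    # Hypothetical hand-set weights: 3 inputs -> 2 hidden neurons -> 1 output.
    h = [relu(v) for v in dense(pixels, [[0.5, -0.2, 0.1],
                                         [-0.3, 0.8, 0.4]], [0.0, 0.1])]
    # Sigmoid squashes the final score into a 0-to-1 value.
    out = dense(h, [[1.2, -0.7]], [0.05])[0]
    return 1.0 / (1.0 + math.exp(-out))

score = forward([0.9, 0.1, 0.4])  # three fake "pixel" values
print(round(score, 3))
```

Real deepfake networks work on the same principle, just with millions of learned weights instead of a handful of hand-picked ones.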
In the case of deepfake videos, hundreds of facial images that appear online from various angles with different expressions are used to construct whatever illusion strikes the maker’s fancy. That’s why celebrities and public officials are typically the victims of deepfake videos, though anyone with images online is vulnerable. As technology continues to develop, deepfake software can create more realistic videos with less data.
So-called “human image synthesis” software isn’t hard to come by (just Google it). You’ll discover dedicated free-to-the-consumer (but ad-supported) websites and apps, too, and hobbyists who provide instruction on blogs. As a result, swapping out faces has become relatively easy and affordable. If you don’t have the time, tech savvy, or patience to create such a video, chances are you can find a freelancer who’ll do it for less than the price of a date night out at the movies. One app maker created—and then quickly pulled, after the online uproar—DeepNude, which would “disrobe” any image of a woman you feed it. (It did not work on male images.)
How to protect yourself from deepfake videos
With the power of the internet, it’s not hard to imagine how a lie generated by a deepfake video can spin out of control, damage the reputation of a politician, or potentially shift the outcome of an election. But, as Hany Farid, Ph.D., professor of computer science at the UC Berkeley School of Information, points out in an interview with NOVA, you don’t need to fool tens of millions of people to make an impact. A well-timed deepfake about a particular company before its initial public offering, for instance, or a video featuring a political candidate before a close election can be just as damaging, or even more so, even with fewer shares.
Figuring out whether someone in a video is real is far from easy. No sooner had researchers pointed out that targets in deepfake videos don’t blink as much, if at all, than scammers figured out how to remedy that glitch. More sophisticated means—for instance, deciphering a person’s unique correlations between facial expressions and head movements—hold some promise for debunking deepfake videos. In the meantime, take a close look at the facial features:
- The skin may appear smoother or softer than its surroundings.
- Details such as the eyes and teeth may be blurred.
- The subject doesn’t turn their head much (turning would reveal the model’s lack of 3-D detail).
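The blink cue researchers exploited can be made concrete. One common heuristic in facial-landmark analysis is the “eye aspect ratio”: the ratio of eye height to eye width drops sharply when the eye closes, so a video in which this ratio never dips is suspicious. The sketch below assumes the widely used six-point eye-landmark layout; the landmark coordinates are hypothetical values, not output from a real detector.

```python
import math

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmarks in the common 6-point layout:
    # outer corner, two upper-lid points, inner corner, two lower-lid points.
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])  # lid-to-lid heights
    horizontal = dist(eye[0], eye[3])                        # corner-to-corner width
    return vertical / (2.0 * horizontal)

# Hypothetical landmarks for an open eye and a nearly closed (blinking) eye.
open_eye   = [(0, 0), (2, -2.0), (4, -2.0), (6, 0), (4, 2.0), (2, 2.0)]
closed_eye = [(0, 0), (2, -0.3), (4, -0.3), (6, 0), (4, 0.3), (2, 0.3)]

print(eye_aspect_ratio(open_eye))    # stays high while the eye is open
print(eye_aspect_ratio(closed_eye))  # dips sharply during a blink
```

In practice, detectors track this ratio frame by frame; a natural video shows periodic dips (blinks), while some early deepfakes showed almost none.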
While it’s typically political or celebrity deepfakes that go viral, everyday folks aren’t immune to deepfake threats. From revenge-seeking exes to discontented employees, just about anyone with some digital know-how (or a small amount of cash to hire someone with that knowledge) could potentially manufacture a fake video.
Unfortunately, the safest strategies to avoid being victimized aren’t very practical. These include:
- Not having images or videos of yourself online.
- Or, if you do, having an object in front of your face to make it difficult to re-fabricate.
As we said, not very realistic. At this point, perhaps the best way to limit the damage of deepfake videos is to spread the word: you can’t take everything you see on the internet at face value.