
Deepfake
noun \ˈdēp-ˌfāk\
a technique that uses technology to superimpose images
and video onto existing ones, creating the false impression
that the edited version is the original image or video;
also, the edited image or video itself
The age of deepfakes
Quite understandably, most people are unaware of how far technology has progressed in creating photorealistic deepfake videos of human beings, and of its potential applications, for good or evil, in the future. What is a “deepfake”? Deepfake is AI- and machine-learning-based technology used to produce or alter video content so that it presents a person saying or doing something that didn’t, in fact, occur.
You may have seen deepfakes popping up recently in news stories involving famous politicians (including Donald Trump and Nancy Pelosi), celebrities, and even tech executives like Mark Zuckerberg. Ironically, at the same time that the federal government is planning to investigate the abuse of citizens’ privacy by Facebook and other big-tech companies, Mark Zuckerberg himself was the intentional target of a deepfake TV interview in which he was made to appear to give a sinister speech about the power of Facebook to use data stolen from billions of people to control their lives.
What might an even more sinister deepfake look like? Imagine, for instance, that it is November 2, 2020, the day before the presidential election, and a deepfake video shows a presidential candidate saying or doing something unforgivably horrible, changing the outcome of the election before the public has time to learn the truth.
Continued from the emailed newsletter
In a video or on the internet, a face consists of digital information. In the digital world, faces are not immutable: they can be “Photoshopped.” Anyone can do it, even smart kids. Big tech is well aware that its image-manipulation software is a double-edged sword, depending on the user’s goal, and is working on ways to stop the use of its own software to manipulate video, audio, and other content.
Adobe, for example, the maker of the flagship photo- and bitmap-editing software Photoshop, is creating an AI tool to automatically spot image manipulation. For companies like Adobe, fake or deepfake content produced with their creative tools is not only an important ethical issue but could become a serious business issue. Photoshop’s Liquify feature, after all, is the perfect tool for adjusting digital facial expressions to create fake ones.
The way Adobe built this detection tool also illustrates how fake images themselves are created. Adobe’s software engineers trained a neural network on a database of paired faces (originals alongside their edited copies) to create an algorithm that can both spot and create “edited faces,” and can even restore an edited face to its original, unedited appearance. In other words, in the coming digital world it will become harder and harder to know whether a face, a body, or any digital information whatsoever is authentic. In the years to come, how would you know that a photo or video of a political candidate on Facebook or YouTube was real?
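For technically curious readers, here is a minimal sketch in Python, using the PyTorch library, of what “training on a database of paired faces” looks like. Everything in it is our own illustration, not Adobe’s actual system: the “photos” are random stand-in data, the “edit” is a crude flattened patch rather than a real Liquify warp, and the network is deliberately tiny where the published research uses large convolutional networks.

```python
import torch
import torch.nn as nn

# Stand-ins for a database of paired faces: each "edited" image is its
# original with a crude, easily learnable change (a flattened patch).
# Adobe's actual research pairs real photos with Liquify-warped copies.
originals = torch.rand(64, 64 * 64)   # 64 fake "photos," 64x64 pixels each
edited = originals.clone()
edited[:, :1024] = 0.5                # crude stand-in for a warped region

images = torch.cat([originals, edited])
labels = torch.cat([torch.zeros(64), torch.ones(64)])  # 0 = untouched, 1 = edited

# A deliberately tiny detector; the real work uses deep convolutional nets.
detector = nn.Sequential(nn.Linear(64 * 64, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(detector.parameters())

for _ in range(300):  # learn to separate edited faces from untouched ones
    optimizer.zero_grad()
    logits = detector(images).squeeze(1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    loss.backward()
    optimizer.step()

# Score a new image: values near 1 mean "probably manipulated."
suspect = torch.rand(1, 64 * 64)
suspect[:, :1024] = 0.5
print(torch.sigmoid(detector(suspect)).item())
```

The real system goes further: because it was trained on before-and-after pairs, it can estimate the warp itself and approximately reverse it, restoring the unedited face.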
It’s not just people editing photos and videos on computers who create deepfakes. Combine advanced neural networks that can be trained on millions of labeled images with the latest generation of computer chips that can process more than 10,000 images per second, and you get computers that don’t have to be told what to do to create deepfakes: they can supervise and learn for themselves.
In the age of deepfakes, packaging and selling or licensing this combination of potent microprocessors and deepfake-automation software (probably in the cloud) could become very profitable. Machines operating a hundred times more powerfully than today’s computers to produce deepfakes would have value for massive surveillance (think of China’s oppressive monitoring of its ethnic minorities, for example) or even for influencing presidential elections.
After this rather ominous-sounding prologue, we will dig deeper into the technology behind deepfakes and the emerging defensive tools, with brief explanations of algorithms and deep learning. Our recommended author, Kartik Hosanagar, does a superb job of explaining what algorithms are and are not. Algorithms, in combination with the internet, increasingly are making some of the most important decisions in our lives, and they also enable the negative consequences of deepfakes.
First, the term “algorithm.” An algorithm is a series of steps to get something done, written in a language that computers can understand. The job of a programmer is to figure out the exact sequence of steps required to accomplish a task. Thanks to advances in AI, algorithms can assimilate data and learn how to create new sequences of steps on their own. Think, in effect, of the food being cooked almost magically becoming the chef. Machine learning, a subfield of AI, enables machines to learn from their own experience how to improve their performance of tasks. To see the difference in practice, consider the sketch below.
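Here is a minimal sketch in Python (the spam-filter example and its rules are ours, purely for illustration): the first function is a classic algorithm whose steps a programmer spelled out by hand, while the second derives its decision rule from labeled examples, which is the essence of machine learning.

```python
# A classic algorithm: the programmer spells out every step.
def is_spam_by_rule(subject: str) -> bool:
    # Step 1: normalize the text; Step 2: check for a known keyword.
    return "free money" in subject.lower()

# A (toy) learned rule: the program derives the decision from data.
def learn_spam_words(examples):
    """Collect words that appear only in messages labeled as spam."""
    spam_words, ham_words = set(), set()
    for subject, is_spam in examples:
        (spam_words if is_spam else ham_words).update(subject.lower().split())
    return spam_words - ham_words

training_data = [
    ("free money inside", True),
    ("win free money now", True),
    ("lunch on friday?", False),
    ("free lunch friday", False),
]
learned_words = learn_spam_words(training_data)  # {'money', 'win', 'now', 'inside'}

def is_spam_learned(subject: str) -> bool:
    return any(word in learned_words for word in subject.lower().split())

print(is_spam_by_rule("FREE MONEY for you"))  # True: rule written by hand
print(is_spam_learned("send money now"))      # True: rule derived from data
```

Feed the second program different training examples and it behaves differently, with no programmer rewriting its steps: the food has, in a small way, become the chef.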
Algorithms are incorporating more and more AI and machine learning. As a result, they can become autonomous systems capable of providing everything from purchase advice to diagnostic guidance for doctors and, more controversially and subject to bias, risk rankings for judges who are evaluating whether criminal defendants will commit more crimes or abscond.
AI-based algorithms are not only here to stay; they have become the means for “deep learning,” which enables deepfakes. Deep learning is a term for digital neural networks (think brains) arranged in multiple layers that can process huge amounts of data to identify the patterns that matter, patterns that human brains, even those of terrifically smart programmers, might not find or notice. Deep-learning algorithms thrive on data. The combination of massive datasets, huge amounts of computer processing power, advanced computer chips, and deep-learning algorithms can produce either miracles of modern science, like diagnosing disease, or evil and malicious deepfakes.
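As a rough picture of what “layers” means, here is a minimal sketch using Python and NumPy. The layer sizes and the comments about what each layer might detect are our own simplification; a real network learns its millions of connection weights from data, whereas this sketch uses random ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One 'layer of neurons': a weighted sum followed by a nonlinearity."""
    w = rng.standard_normal((x.shape[-1], n_out)) * 0.1  # weights (learned in practice)
    return np.maximum(0.0, x @ w)                         # ReLU activation

# A 64x64 grayscale image flattened into 4,096 numbers.
pixels = rng.random(64 * 64)

# "Deep" simply means many layers stacked, each transforming the last one's output.
h = layer(pixels, 512)   # layer 1: low-level patterns (edges, textures)
h = layer(h, 128)        # layer 2: combinations of patterns (eyes, mouths)
h = layer(h, 32)         # layer 3: higher-level structure (a whole face?)
score = h.sum()          # stand-in for a final decision, e.g. "real or fake"
print(f"network output: {score:.3f}")
```

Training is the process of nudging all those weights, over millions of examples, until the final output reliably means something, which is why deep learning thrives on data.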
For several years, AI and deep learning have been used to generate synthetic video, images, and audio: deepfakes. In 2018, we saw the emergence of an unprecedented and malicious political threat from these deepfakes. AI-powered synthetic media makes it possible to generate highly realistic audiovisual media of people saying or doing things they have never said or done. But it is also relatively easy even for you or your geeky kids to create deepfake images. Social media apps like Snapchat use face-morphing technology that could be used to create deepfakes. Free and easy-to-use higher-end tools like FakeApp, built on open-source software written by Google, let you realistically generate face swaps. Techniques even exist to generate fake full-body animations (PDF link).
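For the curious: face-swap tools of this kind are widely described as using an “autoencoder” design, with one shared encoder that learns a compact description of any face (pose, expression, lighting) and one decoder per person that learns to redraw that person’s face from the description. Here is a heavily simplified sketch in Python with PyTorch; the layer sizes and the random stand-in images are our own invention, not FakeApp’s actual code.

```python
import torch
import torch.nn as nn

# Shared encoder: learns a compact description (pose, expression, lighting)
# that applies to any face it is trained on.
encoder = nn.Sequential(nn.Linear(64 * 64, 256), nn.ReLU())

# One decoder per identity: each learns to redraw one specific person's face
# from that shared description.
decoder_a = nn.Sequential(nn.Linear(256, 64 * 64), nn.Sigmoid())
decoder_b = nn.Sequential(nn.Linear(256, 64 * 64), nn.Sigmoid())

# Random stand-ins for cropped, aligned 64x64 face images of persons A and B.
faces_a = torch.rand(8, 64 * 64)
faces_b = torch.rand(8, 64 * 64)

params = [*encoder.parameters(), *decoder_a.parameters(), *decoder_b.parameters()]
optimizer = torch.optim.Adam(params)

# Training sketch: each decoder learns to reconstruct its own person's faces.
for _ in range(100):
    optimizer.zero_grad()
    loss = (nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)
            + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    optimizer.step()

# The swap: encode person A's pose and expression, decode it as person B.
fake_b = decoder_b(encoder(faces_a))
print(fake_b.shape)  # eight synthetic "person B" faces driven by person A
```

Roughly speaking, once such a system is trained on enough real footage of two people, running every frame of person A through person B’s decoder yields a face-swapped video.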
Many Americans already are saying that the creation and spread of made-up news and information is causing significant harm to the nation and needs to be stopped, according to a new Pew Research Center survey. Indeed, more Americans view made-up news as a very big problem for the country than say the same of identity theft, terrorism, illegal immigration, racism, or sexism. According to the Pew Research Center, U.S. adults blame political leaders and activists far more than journalists for the creation of made-up news intended to mislead the public, yet they misguidedly believe that it is primarily the responsibility of journalists to fix the problem. The public has a great deal more to learn about deepfakes before it can come up with, or embrace, any realistic solutions.
On June 1, 2019, the Daily Beast website published a story exposing the creator of a now-infamous fake video that appeared to show House Speaker Nancy Pelosi drunkenly slurring her words. The video was created by taking a genuine clip, slowing it down, and then adjusting the pitch of her voice to disguise the manipulation. The story makes it clear that you could probably do the same after watching a few YouTube tutorials on video editing.
But as we’ve explained in this article, more complicated deepfake fabrications require algorithmic techniques to depict people doing things they’ve never done: not just slowing them down or changing the pitch of their voice, but making them appear to say things they’ve never said at all. The research article mentioned earlier even suggests a technique for generating full-body animations, which could effectively make digital action figures of any famous person.
But the ease with which people can create fake YouTube videos should raise concerns about another part of the problem: how such videos find their audience. YouTube’s recommendation algorithm is programmed to tell viewers which videos they should watch next. These recommendations are powered by deep neural networks built by Google Brain, Google’s AI research team, which learn from everything viewers watch and do on the internet. More than 70% of the time people spend on YouTube is guided by these recommendations, and Google generates revenue by directing viewers to videos with ads.
As we have explained, deep learning is the form of AI in which algorithms known as neural networks learn new skills by processing vast amounts of data. These algorithms, the key elements for creating deepfakes, have found their way into online repositories where computer code is shared, and thus can be exploited by amateur developers. All it takes to create a believable deepfake is a laptop with a good graphics processing unit and a little software know-how.
Concerns about malicious uses of deepfakes are growing to the point where software and systems are being developed to recognize deepfakes of world leaders, including Donald Trump, Theresa May, and Angela Merkel. Deepfake defense tools are being developed that can detect even incredibly subtle flaws in deepfakes and determine how they were created, which in turn reveals clues about the creator.
What does it mean for all of us that seeing is no longer believing? What does it mean for political systems and even church congregations? What shift in perspective and mindset is required to rewire ourselves, from childhood onward, to stop automatically believing what we hear and see on the internet, and to recognize that everything there could be, at best, some form of misinformation and, at worst, an unprecedented threat to our lives?
As deepfakes continue to improve and potentially become indistinguishable from real video or audio content, we need much more than education and increased skepticism about digital media. The federal government, with the help of the private sector, needs to develop and deploy a deepfake defense strategy and tools essential to implement it. St. James Faith Lab will continue to be in the vanguard of promoting deepfake awareness, technological solutions, robust safeguards, and mitigation initiatives.
Let us know your thoughts and ideas.
—The Rev. Canon Cindy Evans Voorhees
and the Faith Lab Team