If you are a movie lover, you have probably marveled at the sophistication of computer-generated imagery (CGI). Dinosaurs and aliens have never looked more real on screen. We know those images are not real; however, advanced image technology can be used for nefarious purposes in the real world. If you have been following the war in Ukraine, you have probably heard about the “deepfake” video of Ukrainian President Volodymyr Zelenskyy urging his military members to lay down their arms. Fortunately, it was quickly debunked. Deepfakes are getting so realistic that we can no longer say, “Seeing is believing.” Several years ago, journalist Joe Andrews warned that deepfakes were going to become a real problem. He explained, “The threat is called ‘deepfaking,’ a product of AI and machine learning advancements that allows high-tech computers to produce completely false, yet remarkably realistic, videos depicting events that never happened or people saying things they never said.” A video accompanying his article underscored his argument.
Andrews notes, “Deepfake technology is allowing organizations that produce fake news to augment their ‘reporting’ with seemingly legitimate videos, blurring the line between reality and fiction like never before — and placing the reputation of journalists and the media at greater risk.” If you think deepfakes are a problem only because they can spread lies and misinformation, Sam Gregory (@SamGregory) of the human rights group Witness points to an even greater problem: they make people suspicious of the truth. He explains, “The particular issue is around the so-called liar’s dividend, where it’s easy to claim a true video is falsified and place the onus on people to prove it’s authentic.” Hany Farid, a professor at the University of California and an expert in digital media forensics, adds, “It pollutes the information ecosystem, and it casts a shadow on all content.”
Using AI to Counter Deepfakes
Ronald van Loon (@Ronald_vanLoon), CEO of Intelligent World, reports that deepfakes are a growing challenge. “Though in many cases deepfakes manifest as harmless memes or clever marketing campaigns,” he writes, “deepfake technologies are a growing cultural, political, economic, social, and business risk with the power to cause harm. The implications of deepfakes are disturbing, from spreading disinformation and inflicting reputational damage on political and public figures, to corporate espionage, and cyberattacks. Dedicated deepfake communities and sites are proliferating that even enable consumers to commission custom deepfakes.” According to many experts, countering deepfakes will require fighting fire with fire (i.e., using AI to detect AI-created deepfakes). Van Loon goes on to discuss some of the efforts currently underway to fight deepfakes. They are:
• The Deepfake Detection Challenge. “The DeepFake Detection Challenge (DFDC), a competition created by AWS, Microsoft, Facebook, the Partnership on AI, and academics, was run on Kaggle and offered a $1 million prize to global researchers who could develop innovative technologies to aid in detecting Deepfakes and manipulated media. It garnered over 2,000 participants and generated over 35,000 deepfake detection models.”
• MIT Detect Fakes. “Detect Fakes is an MIT research initiative that strives to pinpoint methods to counteract AI-generated misinformation, and features videos that prompt participants to test whether they can discern a DeepFake from a real video.”
• UC Berkeley/Stanford Lip-sync Research. “Researchers from UC Berkeley and Stanford created an AI-driven approach to detect lip-sync technology, which is able to identify 80 percent of fakes by understanding the misalignment between the shapes of people’s mouths and the sounds they make when they speak.”
• Microsoft Deepfake Detection Tool. “Microsoft released a commercial deepfake detection tool which analyzes video frames and generates a software confidence score indicating if the frame is real or AI-produced. Notably, it was made accessible to various companies who monitored the 2020 U.S. elections.”
• Intel and Binghamton University Research. “Research teams from Intel and the Graphics and Image Computing lab at Binghamton University developed a tool that uses biological signals and data to identify and classify deepfakes with 96 percent accuracy. The tool is based on the idea that while facial videos can be synthesized, subtle physiological signals like heart rate fluctuations and blood flow that exhibit as pixel color changes, can’t be easily reproduced.”
Van Loon cautions, “Though innovations are emerging to potentially identify deepfakes, most remain in the research or development stages, and some authorities even caution that there might not be a long-term, technically driven solution for deepfakes.” And that’s a problem. Rob Toews (@_RobToews), a venture capitalist at Radical Ventures, explains, “Today’s deepfake technology is still not quite to parity with authentic video footage — by looking closely, it is typically possible to tell that a video is a deepfake. But the technology is improving at a breathtaking pace. Experts predict that deepfakes will be indistinguishable from real images before long.”
Toews reports, “The amount of deepfake content online is growing at a rapid rate. At the beginning of 2019 there were 7,964 deepfake videos online, according to a report from startup Deeptrace; just nine months later, that figure had jumped to 14,678. It has no doubt continued to balloon since then. … As of September 2019, 96% of deepfake videos online were pornographic, according to the Deeptrace report.” Concerns about deepfakes go well beyond the pornographic sector. Toews cites a report from the Brookings Institution that “grimly summed up the range of political and social dangers that deepfakes pose.” The report concluded, “[Deepfakes could end up] distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.”
If you are a fan of America’s Got Talent, you probably watched a deepfake Simon Cowell sing live during one of the show’s auditions. Although the technology wasn’t perfect, the judges and the audience were blown away.
Even if surefire deepfake detection software is developed, damage can still be done. Deepfakes are easy to spread via social media. Half a dozen years ago, Eugene Kiely (@ekiely) and Lori Robertson reported, “Bogus stories can reach more people more quickly via social media than what good old-fashioned viral emails could accomplish in years past.” Social media is, in fact, playing a major role in spreading fake news (including deepfakes); but lies have always found ways to speed through society. Way back in 1710, Jonathan Swift, most famous for penning Gulliver’s Travels, wrote in The Examiner, “Falsehood flies, and the truth comes limping after it.” Even Swift, however, might be surprised at how fast falsehoods can fly in the Information Age.
 Bobby Allyn, “Deepfake video of Zelenskyy could be ‘tip of the iceberg’ in info war, experts warn,” NPR, 16 March 2022.
 Joe Andrews, “Fake news is real – A.I. is going to make it much worse,” USA Today, 12 July 2019.
 Allyn, op. cit.
 Ronald van Loon, “The Role of AI in Identifying Deepfakes,” Simplilearn, 16 March 2022.
 Rob Toews, “Deepfakes Are Going To Wreak Havoc On Society. We Are Not Prepared.” Forbes, 25 May 2020.
 Eugene Kiely and Lori Robertson, “How to Spot Fake News,” Factcheck.org, 18 November 2016.
 Joshua Gillin, “NFL’s Colin Kaepernick incorrectly credits Winston Churchill for quote about lies,” PunditFact, 9 October 2017.