By now most people are aware that fake news stories helped sway voter sentiment during the last U.S. presidential election. Serious danger lurks when people’s beliefs are untethered from the truth; yet studies have shown we often can’t help believing things we read. British psychologist Dr. Jeremy Dean explains, “You shouldn’t believe everything you read, yet according to a classic psychology study at first we can’t help it.”[1] He asks an interesting question: “What is the mind’s default position: are we naturally critical or naturally gullible?” My first inclination is to say it depends on the person. Some people appear to have a trusting nature while others are natural skeptics. Dean asserts, however, “All our minds are built with the same first instinct, the same first reaction to new information. But what is it?” Dean notes the debate is at least 400 years old, but modern studies have demonstrated the Dutch philosopher Baruch Spinoza was probably correct. According to Dean, “[Spinoza] thought that the very act of understanding information was believing it. We may, he thought, be able to change our minds afterwards, say when we come across evidence to the contrary, but until that time we believe everything.” That’s a problem, especially since so many people seem unwilling to search for contrary evidence.
Eliminating fake news stories is an impossible task because there are monetary and political incentives to create them. To counter these stories, my company, Enterra Solutions®, is partnering with VIDL (Vital Intelligence Data Live) News to develop and implement a proprietary ‘Truth in News’ AI platform. By applying machine learning to breaking news and editorial stories, the platform aims to restore consumer trust in the media marketplace, analyzing third-party news stories, social media posts, and external data to provide users with accurate information. Unfortunately, fake news includes more than fake stories.
Beyond the Headlines
There is an idiom that probably traces back to prehistory: I’ll believe it when I see it. In the New Testament, the Apostle Thomas epitomized this saying when he insisted he wouldn’t believe Jesus had risen from the grave unless he saw him with his own eyes. His skepticism earned him the nickname Doubting Thomas. Yet “seeing is believing” has probably never been true — mystics and magicians have made good livings hoaxing people throughout history with marvelous illusions. Doctored photographs have played a role in getting people to believe in lake monsters and alien spacecraft. Unfortunately, hoaxers are getting ever more sophisticated thanks to artificial intelligence (AI).
Fake Photos
“Fraudulent images have been around for as long as photography itself,” writes Lawrence Greenemeier. “Photoshop ushered image doctoring into the digital age. Now artificial intelligence is poised to lend photographic fakery a new level of sophistication, thanks to artificial neural networks whose algorithms can analyze millions of pictures of real people and places — and use them to create convincing fictional ones.”[2] The technique used to create these fake images is the generative adversarial network, or GAN, in which a generator network learns to produce images while a discriminator network learns to tell them apart from real photographs; as each improves, the generator’s output becomes harder and harder to distinguish from the real thing. “Highly realistic AI-generated images and video hold great promise for filmmakers and video-game creators needing relatively inexpensive content,” explains Greenemeier. “It remains to be seen whether online mischief makers — already producing fake viral content — will use AI-generated images or videos for nefarious purposes. At a time when people increasingly question the veracity of what they see online, this technology could sow even greater uncertainty.”
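To make the idea concrete, here is a toy sketch of the adversarial training loop at the heart of a GAN. It is written in PyTorch purely for illustration (none of the sources specify an implementation); the image size, layer widths, and random stand-in data are placeholder assumptions, and production face-synthesis systems are vastly larger and train on millions of real photographs.

    # Minimal sketch of the adversarial setup behind GAN image synthesis.
    # Illustrative only: real face-generation models are far larger and
    # train on millions of photographs, not random stand-in tensors.
    import torch
    import torch.nn as nn

    LATENT = 64    # size of the random "noise" vector the generator starts from
    IMG = 28 * 28  # toy flattened image size; real systems generate full faces

    # Generator: turns random noise into a synthetic image.
    generator = nn.Sequential(
        nn.Linear(LATENT, 256), nn.ReLU(),
        nn.Linear(256, IMG), nn.Tanh(),
    )

    # Discriminator: tries to tell real images from generated ones.
    discriminator = nn.Sequential(
        nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

    loss = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    for step in range(1000):
        # Stand-in for a batch of real photographs.
        real = torch.rand(32, IMG) * 2 - 1
        fake = generator(torch.randn(32, LATENT))

        # Train the discriminator to separate real from fake.
        d_opt.zero_grad()
        d_loss = loss(discriminator(real), torch.ones(32, 1)) + \
                 loss(discriminator(fake.detach()), torch.zeros(32, 1))
        d_loss.backward()
        d_opt.step()

        # Train the generator to fool the discriminator.
        g_opt.zero_grad()
        g_loss = loss(discriminator(fake), torch.ones(32, 1))
        g_loss.backward()
        g_opt.step()

The design point worth noticing is the arms race itself: the generator improves only because the discriminator keeps getting better at catching it, which is exactly why the resulting fakes end up so convincing.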
Fake Videos
Anyone who has gone to the movies in recent years has to marvel at the realism now found in computer-generated imagery (CGI). We understand much of what we see in movie theaters isn’t real, but we don’t apply the same level of skepticism in real life. Stephen Schmidt, Christie Taylor, and Brandon Echter write, “Behold the next trend in skewed reality that experts say could threaten US democracy: fake videos that appear authentic by embedding real people’s faces onto other bodies through artificial intelligence algorithms. It has sparked a debate on how to verify videos shared online.”[3] Like so many other commentators on fake news, Schmidt, Taylor, and Echter point to the 2016 presidential campaign. “This phenomenon,” they write, “also began during the presidential campaign. People began slicing videos to falsely make it look as if events took place.” They explain the videos are “created using a machine-learning algorithm. It works by taking a data set with hundreds of photos of one person and blending them into original video footage where the person’s face is pasted onto another person’s body. Recently, an app was released that could help anyone achieve this result.” Ariel Bogle (@arielbogle), citing American law professors Bobby Chesney and Danielle Citron on the Lawfare blog, cautions, “Doctored videos could show politicians ‘taking bribes, uttering racial epithets, or engaging in adultery’. … Even a low-quality fake, if deployed at a critical moment such as the eve of an election, could have an impact.”[4]
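The description above corresponds to a shared-encoder, dual-decoder autoencoder, a design used by many early face-swap tools. The toy sketch below (again PyTorch, again with placeholder layer sizes, variable names, and random stand-in data rather than real face crops) shows the core trick: one encoder learns features common to both people, each person gets a dedicated decoder, and the swap happens when a frame of person A is decoded with person B’s decoder. A real pipeline would also need face detection, alignment, and blending of the swapped face back into each video frame.

    # Minimal sketch of the shared-encoder / dual-decoder idea behind face-swap
    # ("deepfake") tools. Shapes and data are placeholders for illustration only.
    import torch
    import torch.nn as nn

    FACE = 64 * 64 * 3  # flattened toy face crop

    # One encoder learns features common to both people (pose, expression, lighting).
    encoder = nn.Sequential(nn.Linear(FACE, 512), nn.ReLU(), nn.Linear(512, 128))

    # One decoder per identity learns to reconstruct that person's face.
    decoder_a = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, FACE), nn.Sigmoid())
    decoder_b = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, FACE), nn.Sigmoid())

    loss = nn.MSELoss()
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
        lr=1e-3,
    )

    for step in range(1000):
        faces_a = torch.rand(16, FACE)  # stand-in for photos of person A
        faces_b = torch.rand(16, FACE)  # stand-in for photos of person B

        # Each decoder learns to rebuild its own person from the shared encoding.
        opt.zero_grad()
        recon_loss = loss(decoder_a(encoder(faces_a)), faces_a) + \
                     loss(decoder_b(encoder(faces_b)), faces_b)
        recon_loss.backward()
        opt.step()

    # The "swap": encode a frame of person A, then decode with B's decoder,
    # producing B's face with A's pose and expression.
    swapped = decoder_b(encoder(faces_a))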
Fake Voices
“In 2018,” writes technologist William Welser (@WilliamWelserIV), “fears of fake news will pale in comparison to new technology that can fake the human voice. This could create security nightmares. Worse still, it could strip away from each of us a part of our uniqueness.”[5] It’s easy to see how faked pictures, videos, and voices could be used for criminal purposes such as blackmail. Welser’s concern goes much deeper. He explains, “A nefarious actor may easily be able to create a good enough vocal impersonation to trick, confuse, enrage or mobilize the public. Most citizens around the world will be simply unable to discern the difference between a fake Trump or Putin soundbite and the real thing. When you consider the widespread distrust of the media, institutions and expert gatekeepers, audio fakery could be more than disruptive. It could start wars. Imagine the consequences of manufactured audio of a world leader making bellicose remarks, supported by doctored video. In 2018, will citizens — or military generals — be able to determine that it’s fake?”
Summary
Welser concludes, “The biggest loss caused by AI will be the complete destruction of trust in anything you see or hear.” Swapna Krishna (@skrishna) asks, “If AI can be used to face swap, can’t it also be used to detect when such a practice occurs?”[6] For that matter, can’t AI be used to fight all forms of fake news? That’s what companies like VIDL News are trying to do. And, according to Krishna, other companies are trying to use AI to fight fakery as well. She reports that a new paper on arXiv.org describes an algorithm with the potential to identify forged videos as soon as they are posted online. This AI-versus-AI battle will only escalate as the technologies on both sides grow more sophisticated. Returning to the subject of our gullibility, Dean notes, “Spinoza’s approach is unappealing because it suggests we have to waste our energy rooting out falsities that other people have randomly sprayed in our direction, whether by word of mouth, TV, the internet or any other medium of communication.” AI will help root out falsities, but people still have to care enough to seek out the truth.
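To give a sense of what such detection involves, here is a generic sketch of one common approach: train a classifier on labeled examples of authentic and manipulated frames, then flag new frames that score as likely fakes. It is a PyTorch illustration with made-up layer sizes and random stand-in data; it is not the specific algorithm in the paper Krishna cites.

    # Generic sketch of a learned fake-video detector: a small convolutional
    # classifier over face crops, labeled real (0) or manipulated (1).
    # Illustrative only; not the method of the cited arXiv paper.
    import torch
    import torch.nn as nn

    detector = nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, 1),  # one logit: real vs. manipulated
    )

    loss = nn.BCEWithLogitsLoss()
    opt = torch.optim.Adam(detector.parameters(), lr=1e-3)

    for step in range(1000):
        # Stand-ins for labeled training frames (toy 3x64x64 crops).
        frames = torch.rand(16, 3, 64, 64)
        labels = torch.randint(0, 2, (16, 1)).float()

        opt.zero_grad()
        batch_loss = loss(detector(frames), labels)
        batch_loss.backward()
        opt.step()

    # At inference time, frames scoring above a chosen threshold are flagged as likely fakes.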
Footnotes
[1] Jeremy Dean, “Why You Can’t Help Believing Everything You Read,” Psyblog, 17 September 2009.
[2] Lawrence Greenemeier, “Spot the Fake: Artificial Intelligence Can Produce Lifelike Photographs,” Scientific American, 1 April 2018.
[3] Stephen Schmidt, Christie Taylor, and Brandon Echter, “AI-based fake videos pose the latest threat to what we perceive as reality — and possibly our democracy,” KERA News, 18 March 2018.
[4] Ariel Bogle, “‘Deep fakes’: How to know what’s true in the fake-Obama video era,” ABC (Australian Broadcasting Corporation) News, 3 March 2018.
[5] William Welser IV, “Fake news 2.0: AI will soon be able to mimic any human voice,” Wired, 8 January 2018.
[6] Swapna Krishna, “Researchers use machine learning to quickly detect video face swaps,” Engadget, 11 April 2018.