The rise of social media unleashed a new era of lying, cheating, bullying, and conspiracy theories. Many people are concerned that, in social media circles, false claims travel faster than the truth. Several years ago, Eugene Kiely and Lori Robertson, from FactCheck.org, wrote, “Fake news is nothing new. But bogus stories can reach more people more quickly via social media than what good old-fashioned viral emails could accomplish in years past.” Social media is, in fact, playing a major role in spreading fake news; however, lies have always found a way to speed themselves through society. Way back in 1710, Jonathan Swift, most famous for penning Gulliver’s Travels, wrote in The Examiner, “Falsehood flies, and the truth comes limping after it.” Even Swift, however, might be surprised how fast falsehoods can fly in the Information Age.
Since humans first put pen to parchment, there have been nefarious people willing to spread falsehoods. And we know words can be powerful. Lera Boroditsky, a cognitive scientist at the University of California, San Diego, notes, “By choosing how you frame and talk about something, you are cuing others to think about it in a specific way. We can drastically change someone’s perspective by how we choose to talk about and frame something.” Today’s ideologically driven media outlets pretending to be sources of unbiased news certainly know that. False words can be strengthened by false images. Today’s image-editing programs have been used to create “Photoshopped” fake pictures and have destroyed the old adage “seeing is believing.” In times past, however, humans were needed to Photoshop images.
Today, artificial intelligence (AI) programs can create images on their own. These programs, built on generative AI, don’t necessarily produce fakes but can produce original works of art “in the style” of famous artists. Video journalist Matthew Ashe explains, “Visual artists, designers, illustrators and many other creatives have watched the arrival of AI text-to-image generators with a mix of awe and apprehension. This new technology has sparked debate around the role of AI in visual art and issues such as style appropriation. Its speed and efficiency have triggered fears of redundancy among some artists, while others have embraced it as an exciting new tool.” He adds, “The ethics of AI text-to-image generators have been the subject of much debate. A key issue of concern has been the fact that these AIs can be trained on the work of real, living, working artists. This potentially allows anybody using these tools to create new work in these artists’ signature style.”
From Words and Still Images to Videos, Voices, and Content
Fakery has been around for ages. P.T. Barnum made a good living putting things like fake mermaids on display. In art and antiquities circles, fakes have been a consistent problem. Today, evil-minded people are using the latest artificial intelligence-powered tools to create fakes — and they now have a complete toolkit. For years, people have been using AI-powered video editing tools to create fake videos — often showing celebrities participating in pornographic activities. Back in 1994, the movie Forrest Gump demonstrated to a wide audience how realistic video fakery had become. In that movie, however, the fake video clips were clearly meant for entertainment. More recently, a “deepfake” video of Ukrainian President Volodymyr Zelenskyy urging his military members to lay down their arms demonstrated how deepfakes could be used dangerously. Fortunately, the Zelenskyy deepfake was quickly debunked. Several years ago, journalist Joe Andrews warned that deepfakes were going to become a real problem. He explained, “The threat is called ‘deepfaking,’ a product of AI and machine learning advancements that allows high-tech computers to produce completely false, yet remarkably realistic, videos depicting events that never happened or people saying things they never said.”
In those deepfake videos, AI-generated celebrities were made to say things by either splicing together voice recordings of the people being faked or using a good voice impersonator. Not anymore. Technology journalist Luke Hurst notes, “Just a few days into 2023, another powerful use case for AI has stepped into the limelight — a text-to-voice tool that can impeccably mimic a person’s voice. Developed by Microsoft, VALL-E can take a three-second recording of someone’s voice, and replicate that voice, turning written words into speech, with realistic intonation and emotion depending on the context of the text.” Like text-to-image systems, this text-to-speech system raises ethical concerns. Hurst explains, “The tool is not currently available for public use — but it does throw up questions about safety, given it could feasibly be used to generate any text coming from anybody’s voice.”
And, if you are not sure what you want your fake celebrity to say in their fake voice, you can always turn to ChatGPT to help you generate your content. Actor Ryan Reynolds asked ChatGPT to generate advertising text for a commercial touting his telecommunications company Mint Mobile. During the commercial in which he used the text, Reynolds explained that he asked ChatGPT to generate copy that featured a joke, a swear word, and a reminder that Mint Mobile’s holiday promotion was ongoing. And he asked it to create the text using his style. After reading the result, Reynolds described it as “eerie” and “mildly terrifying.” Technology journalist Harry McCracken writes, “OpenAI’s astoundingly glib bot is going to change the world. Before it does, let’s hope it gets far better at knowing what it’s talking about.” And tech journalist Benj Edwards (@benjedwards) tweeted, “It’s possible that OpenAI invented history’s most convincing, knowledgeable, and dangerous liar — a superhuman fiction machine that could be used to influence masses or alter history.”
For years, I have been stressing the importance of ethics in relation to how artificial intelligence is created and used. Carlos Martin, co-founder and CEO of Macami.ai, agrees with me. He writes, “We’re seeing that social media is giving us more content that reaffirms our beliefs, even if they may be wrong. If you’re someone that believes the moon landing was fake, social media’s AI will find more content to keep you interested. It is cold. It does not care if there are a million other facts that prove otherwise — it will still feed you what you want. Why? It’s simple: money. Those algorithms can feed you more ads by keeping your eyeballs busy with that content. Plain and simple.” His comments underscore the fact that ethics must be applied to both the technology and the people developing and using AI.
Martin goes on to note, “There is, as of today, no commonly accepted methodology to feed data to the current machine learning algorithms. There are also no guardrails that help determine right from wrong in these algorithms. … We need to think of these algorithms in a similar manner as we think of the need of humans to have morality and values, to value fairness and manners. It is hard to think of them as needing a moral compass, but the reality is that these algorithms affect human life, which is not always for the better.”
 Eugene Kiely and Lori Robertson, “How to Spot Fake News,” FactCheck.org, 18 November 2016.
 Dina Fine Maron, “Why Words Matter: What Cognitive Science Says about Prohibiting Certain Terms,” Scientific American, 19 December 2017.
 Matthew Ashe, “DALL-E 2, Stable Diffusion, Midjourney: How do AI art generators work, and should artists fear them?” Euronews, 30 December 2022.
 Bobby Allyn, “Deepfake video of Zelenskyy could be ‘tip of the iceberg’ in info war, experts warn,” NPR, 16 March 2022.
 Joe Andrews, “Fake news is real – A.I. is going to make it much worse,” USA Today, 12 July 2019.
 Luke Hurst, “After ChatGPT and DALL·E, meet VALL-E – the text-to-speech AI that can mimic anyone’s voice,” Euronews, 10 January 2023.
 Peter Adams, “Ryan Reynolds reads from AI-generated script in new Mint Mobile ad,” Marketing Dive, 11 January 2023.
 Harry McCracken, “If ChatGPT doesn’t get a better grasp of facts, nothing else matters,” Fast Company, 11 January 2023.
 Carlos Martin, “Why AI Needs a Strong Moral Compass for a Positive Future,” Spiceworks, 15 November 2022.