For centuries, April Fools’ Day has been a day dedicated to pranks and laughs. It’s a day when people looking for serious news need to fact-check anything they read, watch, or hear — because some of the news is deliberate, just-for-fun fake news. In years past, companies have joined in on the fun. For example, in April Fools’ Day advertisements, OpenTable once claimed it had created an app that let you taste a restaurant’s food by simply licking your phone. Liquor manufacturer Pimm’s claimed it had secured the rights to advertise on the clock face commonly referred to as Big Ben. Burger King once claimed to have created both a left-handed Whopper and a chocolate Whopper; Taco Bell claimed to have purchased the Liberty Bell; and Procter & Gamble, maker of Scope mouthwash, once claimed it had created a bacon-flavored mouthwash.
Even some legitimate media outlets have been willing to play the prankster game: PC Computing magazine published an article insisting Congress was considering legislation that would make surfing the internet while drunk a federal crime; the BBC once aired a four-minute segment on how the Swiss were enjoying a bumper harvest from their spaghetti trees; and the fun-loving BBC also claimed to have perfected smell-o-vision and reported that Big Ben was going digital. If you’re committed to pranking someone today, freelance writer Jamie Ballard (@BallardJamie23) offers the following advice: “Know your audience. You want them to be laughing with you, so don’t pull a prank on someone who’s totally going to hate it. Also, keep your jokes lighthearted. It’s not funny to seriously stress someone out or hurt anyone, either physically or emotionally. And if you’re going to pull pranks on your coworkers, make sure it’s not something that’s going to get anyone in trouble. Basically, just be thoughtful about it, ok?”
When Spoofing Turns Ugly
Not all tricks or hoaxes are created with laughter in mind. With advances in artificial intelligence (AI) and deep-fake technology, some spoofs can be harmful — even deadly. Several years ago, cybersecurity journalist Catherine Stupp (@catstupp) reported, “Criminals used artificial intelligence-based software to impersonate a chief executive’s voice and demand a fraudulent transfer of €220,000 ($243,000) in what cybercrime experts described as an unusual case of artificial intelligence being used in hacking. The CEO of a U.K.-based energy firm thought he was speaking on the phone with his boss, the chief executive of the firm’s German parent company, who asked him to send the funds to a Hungarian supplier. The caller said the request was urgent, directing the executive to pay within an hour, according to the company’s insurance firm, Euler Hermes Group SA.”
Losing money is bad, but spoofing can also turn deadly. Shortly after Russia invaded Ukraine, a video appeared in which Ukrainian president Volodymyr Zelensky asked soldiers to lay down their weapons and return to their families. In the video, Zelensky states, “Dear Ukrainians! Being president wasn’t that easy. There is no tomorrow. At least not with me. I suggest you pack up your guns and go back to your families.” Both the picture and the sound were fake. Journalists Tobias Bolzern (@TobiasBolzern) and Jonas Bucher report, “The clip was shared across Ukraine 24’s broadcaster and social media channels. The TV station then shared a warning and announced that they had been hacked. ‘The report about the capitulation is false. A fake. We were hacked by hostile hackers,’ it said in a statement.” Had Ukrainian military forces believed the deepfake video, the consequences for the country could have been severe.
Joshua New (@Josh_A_New), a Technology Policy Executive at IBM, observes, “The risks posed by deepfakes, a portmanteau of ‘deep learning’ and ‘fake,’ fall into two camps: that this technology will intrude on individual rights, such as using a person’s likeness for profit or to create pornographic videos without their consent; and that this technology could be weaponized as a disinformation tool.” The faked Zelensky video is a deadly example of the latter. Daniel Castro (@castrotech), Director of the Center for Data Innovation, writes, “As the tools to produce this synthetic media advance, policymakers are scrambling to address public concerns, and state lawmakers in particular have put forth several proposals this year to respond to deepfakes. Most of these laws generally take the right approach: They make it unlawful to distribute deepfakes with a malicious intent, and they create recourse for those in their state who have been negatively affected by bad actors. However, it is important that lawmakers carefully craft these laws so as not to erode free speech rights or undermine legitimate uses of the technology.”
New cautions, however, “As deepfake technology matures and proliferates, policymakers should recognize that these tools will soon be common place. Though some rules restricting the creation and distribution of deepfakes, and software to produce deepfakes, may be worthwhile, policymakers should not view this strategy as a silver bullet for stopping the threat deepfakes pose, no matter how strict these rules are.” In other words, distinguishing what is real from what is fake is only going to get harder in the years ahead.
Artificial intelligence not only has the ability to create deepfakes of real people; it can also create realistic faces of people who don’t exist. In an interesting twist, a recent study found, “Fake faces created by AI are considered more trustworthy than images of real people.” The reason for this, according to journalist Victoria Masterson, is that “AI learns the faces we like” and, as a result, creates faces matching our preferences. She reports that the World Economic Forum insists AI is creating “new and complex ethical issues.” And deepfakes aren’t the only AI concern that should worry people.
The staff at Wired magazine note that AI facial recognition software can potentially place a person at the scene of a crime when, in fact, they were never there. They write, “Like a lot of tech solutions to complex problems, facial recognition algorithms aren’t perfect. But when the technology is used to identify suspects in criminal cases, those flaws in the system can have catastrophic, life-changing consequences. People can be wrongly identified, arrested, and convicted, often without ever being told they were ID’d by a computer. It’s especially troubling when you consider false identifications disproportionately affect women, young people, and people with dark skin — basically everyone other than white men.” I guarantee that being falsely accused of a crime by an AI system is no laughing matter.
Society must be ever more vigilant because of the nefarious characters among us who deliberately use the latest technology to do harm. We must treat every day as if it’s April Fools’ Day. Nevertheless, if you’re a prankster, go ahead and have some fun today. Make sure, however, that your victims are in on the joke and can share a good laugh with you. We can all use more laughter in our lives. Happy April Fools’ Day.
Jamie Ballard, “15 Best April Fools’ Pranks to Pull on Everyone This Year,” Cosmopolitan, 17 March 2022.
Catherine Stupp, “Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case,” The Wall Street Journal, 30 August 2019.
Tobias Bolzern and Jonas Bucher, “‘Put down your arms’ – fake Selenski causes chaos with deepfake,” Archyworldys, 19 March 2022.
Joshua New, “Deepfakes Deserve Policymakers’ Attention, and Better Solutions,” Center for Data Innovation, 12 September 2019.
Daniel Castro, “Deepfakes Are on the Rise — How Should Government Respond?” Center for Data Innovation, 13 January 2020.
Victoria Masterson, “People trust AI fake faces more than real ones, according to a new study,” World Economic Forum, 15 March 2022.
Staff, “The AI Placed You at the Crime Scene, but You Weren’t There,” Wired, 18 March 2022.