
Does AI Pose an Existential Threat?

April 11, 2024


The late theoretical physicist Stephen Hawking told the BBC, “The development of full artificial intelligence could spell the end of the human race.”[1] Hawking used the term “full artificial intelligence” to mean artificial general intelligence (AGI) — a system that exceeds human intelligence and is self-aware. The concerns raised by Hawking have only grown in the decade since he gave that interview. As a result, the RAND Corporation assembled a group of experts to discuss the following questions: “What are the potential risks associated with artificial intelligence? Might any of these be catastrophic or even existential? And as momentum builds toward boundless applications of this technology, how might humanity reduce AI risk and navigate an uncertain future?”[2] Interestingly, the RAND experts didn’t express much concern about AGI, but did believe that other forms of AI could risk life as we know it.


Dangers Associated with Artificial Intelligence


Edward Geist, a policy researcher at RAND, insisted, “AI threatens to be an amplifier for human stupidity.” He explained that stupidity is amplified when machines “do what you ask for — rather than what you wanted or should have asked for.” He also worried that poorly programmed machines could “make the same kind of mistakes that humans make, only faster and in larger quantities.” Nidhi Kalra, a senior information scientist at RAND, claimed, “AI is gas on the fire. I’m less concerned with the risk of AI than with the fires themselves — the literal fires of climate change and potential nuclear war, and the figurative fires of rising income inequality and racial animus. … But I do have a concern: What does the world look like when we, even more than is already the case today, can’t distinguish fact from fiction?” Jonathan Welburn, a senior RAND researcher who studies emerging systemic risks, predicted, “AI will lead to a series of technological innovations, some of which we might be able to imagine now, but many that we won’t be able to imagine. AI might exacerbate existing risks and create new ones.”


Benjamin Boudreaux, a RAND policy researcher who studies the intersection of ethics, emerging technology, and security, insists the greatest threat posed by AI is the undermining of the institutions and systems that form the foundations of civilization. He explains, “AI could pose a significant risk to our quality of life and the institutions we need to flourish. The risk I’m concerned about isn’t a sudden, immediate event. It’s a set of incremental harms that worsen over time. … AI might be a slow-moving catastrophe, one that diminishes the institutions and agency we need to live meaningful lives. This risk doesn’t require superintelligence or artificial general intelligence or AI sentience. Rather, it’s a continuation and worsening of effects that are already happening. … AI seems to promote mistrust that fractures shared identities and a shared sense of reality. There’s already evidence that AI has undermined the credibility and legitimacy of our election system.” The common thread running through these concerns is not AI technology per se, but the bad actors using the technology. You can count Geoffrey Hinton, an artificial intelligence pioneer, among those worried about such misuse. He insists, “It is hard to see how you can prevent the bad actors from using it for bad things.”[3] In 2023, Hinton quit his post at Google so he could openly express his concerns about AI.


In March 2023, more than a thousand technology leaders and researchers signed an open letter drafted by the Future of Life Institute urging artificial intelligence labs to pause development of their most advanced AI systems. To date, more than 33,000 signatures have been added to the letter. The letter asks, “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” The letter insists, “Such decisions must not be delegated to unelected tech leaders.” I’m not sure we should leave such decisions to politicians either. Today’s politics are characterized by propaganda and untruth. Around the same time the Future of Life Institute’s letter was released, nineteen current and former leaders of the Association for the Advancement of Artificial Intelligence released their own letter. After acknowledging the many benefits of AI, the letter stated, “At the same time, we are aware of the limitations and concerns about AI advances, including the potential for AI systems to make errors, to provide biased recommendations, to threaten our privacy, to empower bad actors with new tools, and to have an impact on jobs.” The letter concluded with a call to action for AI developers “to expand their multiple efforts on AI safety and reliability, ethics, and societal influences, building on the many existing conferences, workshops, and other activities studying both the short-term and longer-term effects of AI on people and society, incentivizing and celebrating strong work on addressing societal and ethical concerns, and integrating topical tracks on responsibilities and ethics into flagship conferences and other scientific meetings.”


The Way Ahead?


Since the ethical use of AI seems to be the goal, the question remains: If AI is used unethically, who should do the enforcing? As noted above, tech companies and elected officials don’t appear to be the answer — although they should be proactive stakeholders. AI experts Gary Marcus and Anka Reuel argue in favor of an international agency. They explain, “Although current AI systems are capable of spectacular feats, they also carry risks. Europol has warned that they might greatly increase cybercrime. … Scientists have warned that these new tools could be used to design novel, deadly toxins. Others speculate that in the long term there could be a genuine risk to humanity itself. … These systems can also be used for deliberate abuse, from disrupting elections (for example by manipulating what candidates appear to say or write) to spreading medical misinformation. … There is plenty of agreement about basic responsible AI principles, such as safety and reliability, transparency, explainability, interpretability, privacy, accountability and fairness. And almost everyone agrees that something must be done — a just-published poll by the Center for the Governance of AI found that 91% of a representative sample of 13,000 people across 11 countries agreed that AI needs to be carefully managed. It is in this context that we call for the immediate development of a global, neutral, non-profit International Agency for AI (IAAI), with guidance and buy-in from governments, large technology companies, non-profits, academia and society at large, aimed at collaboratively finding governance and technical solutions to promote safe, secure and peaceful AI technologies.”[4]


The creation of such an agency obviously faces a number of challenges, not least mustering the political will to establish it. Other challenges stem from the complexity of AI itself. Marcus and Reuel admit, “Each domain and each industry will be different, with its own set of guidelines, but many will involve both global governance and technological innovation. … Designing the kind of global collaboration we envision is an enormous job. Many stakeholders need to be involved. Both short-term and long-term risks must be considered. No solution is going to succeed unless both governments and companies are on board, and it’s not just them: the world’s publics need a seat at the table.” Journalist Steven Levy sees the creation of an international agency as more of an impossible task than an enormous job. He writes, “It seems a stretch to imagine the United States, Europe, and China all working together on this.”[5] And that’s before putting Russia, Iran, and North Korea into the mix.


Concluding Thoughts


Marcus and Reuel conclude, “The challenges and risks of AI are, of course, very different and, to a disconcerting degree, still unknown.” I suspect that much of the fear surrounding the future of AI is connected to the “still unknown” dangers that people suspect are lurking in the future. Tech journalists Sam Schechner and Deepa Seetharaman report that even experts can’t agree on what dangers AI may pose to the future of humankind. In fact, they report, “Artificial-intelligence pioneers are fighting over which of the technology’s dangers is the scariest. One camp, which includes some of the top executives building advanced AI systems, argues that its creations could lead to catastrophe. In the other camp are scientists who say concern should focus primarily on how AI is being implemented right now and how it could cause harm in our daily lives.”[6] I hate to conclude that the challenge of how to deal with AI development is intractable, but that may very well be the case.


Footnotes
[1] Rory Cellan-Jones, “Stephen Hawking warns artificial intelligence could end mankind,” BBC News, 2 December 2014.
[2] Staff, “Is AI an Existential Risk? Q&A with RAND Experts,” RAND Corporation, 11 March 2024.
[3] Cade Metz, “‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead,” The New York Times, 1 May 2023.
[4] Gary Marcus and Anka Reuel, “The world needs an international agency for artificial intelligence, say two AI experts,” The Economist, 18 April 2023.
[5] Steven Levy, “Gary Marcus Used to Call AI Stupid—Now He Calls It Dangerous,” Wired, 5 May 2023.
[6] Sam Schechner and Deepa Seetharaman, “How Worried Should We Be About AI’s Threat to Humanity? Even Tech Leaders Can’t Agree,” The Wall Street Journal, 4 September 2023.
