
On the Road to AI Superintelligence

July 25, 2024


New knowledge is being generated at such a dramatic rate that humans can no longer be expected to absorb and understand it all. Pippa Malmgren, Founder and CEO of the Geopolitica Institute, explains, “If the American futurist R. Buckminster Fuller was right, as he always was, then the boundaries of human knowledge are forever expanding. In 1982, Fuller created the ‘Knowledge Doubling Curve,’ which showed that up until the year 1900, human knowledge doubled approximately every century. By the end of the Second World War, this was every 25 years. Now, it is doubling annually.”[1] Artificial intelligence (AI) now helps humans gather, analyze, and understand available knowledge, continually absorbing new data and learning from it. Nevertheless, Yann LeCun, Vice President and Chief AI Scientist at Meta, insists we need a new approach to AI if humankind is going to take full advantage of insights gleaned from new knowledge. Large language models (LLMs) are currently all the rage; however, LeCun explains, “[LLMs have] very limited understanding of logic. … [They] do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan … hierarchically.”[2] Journalists Hannah Murphy and Cristina Criddle report, “[LeCun is] focused instead on a radical alternative approach to create ‘superintelligence’ in machines.”[3]
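Fuller’s curve is easy to make concrete with a little compounding arithmetic. The snippet below is a back-of-the-envelope sketch; the helper name knowledge_multiple is mine, not Fuller’s. It shows how dramatically the growth factor changes as the doubling period shrinks:

```python
# Compounding arithmetic behind Fuller's "Knowledge Doubling Curve":
# if knowledge doubles every `doubling_period_years`, then after
# `years` years it has grown by a factor of 2 ** (years / period).
def knowledge_multiple(years: float, doubling_period_years: float) -> float:
    return 2 ** (years / doubling_period_years)

print(knowledge_multiple(100, 100))  # pre-1900 pace: 2x per century
print(knowledge_multiple(100, 25))   # post-WWII pace: 16x per century
print(knowledge_multiple(10, 1))     # annual doubling: 1,024x per decade
```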


The Road to AI Superintelligence


Before AI can achieve superintelligence, often referred to as artificial general intelligence (AGI), LeCun believes we first need to develop a system capable of human-level intelligence. Murphy and Criddle report, “[LeCun] is working to develop an entirely new generation of AI systems that he hopes will power machines with human-level intelligence, although he said this vision could take 10 years to achieve.” He told the journalists, “[Achieving AGI is] not a product design problem, it’s not even a technology development problem, it’s very much a scientific problem.” According to Murphy and Criddle, “[LeCun and his team] are working towards creating AI that can develop common sense and learn how the world works in similar ways to humans, in an approach known as ‘world modelling’.” One of the approaches LeCun’s team is using “is feeding systems with hours of video and deliberately leaving out frames, then getting the AI to predict what will happen next. This is to mimic how children learn from passively observing the world around them.”
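To make the frame-prediction idea concrete, here is a minimal sketch in Python/PyTorch. It illustrates the general technique, not Meta’s actual system: the clip data is random noise standing in for real video, and names such as FramePredictor and MASK_IDX are my own. The training signal is the one described above: hide a frame, then train the network to predict it from the frames it can see. (LeCun’s published proposals, such as JEPA, predict in a learned representation space rather than raw pixels, but the masking-and-prediction principle is the same.)

```python
import torch
import torch.nn as nn

FRAMES, H, W = 8, 16, 16   # toy "video": 8 frames of 16x16 grayscale
MASK_IDX = 4               # the frame deliberately left out

class FramePredictor(nn.Module):
    """Predicts the hidden frame from the visible frames around it."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                          # (B, 7, 16, 16) -> (B, 1792)
            nn.Linear((FRAMES - 1) * H * W, 256),
            nn.ReLU(),
            nn.Linear(256, H * W),                 # predict every pixel of one frame
        )

    def forward(self, visible):                    # visible: (B, FRAMES-1, H, W)
        return self.net(visible).view(-1, H, W)

model = FramePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    clip = torch.rand(32, FRAMES, H, W)            # stand-in for a batch of real clips
    target = clip[:, MASK_IDX]                     # the frame the model never sees
    visible = torch.cat([clip[:, :MASK_IDX], clip[:, MASK_IDX + 1:]], dim=1)
    loss = nn.functional.mse_loss(model(visible), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

With real video and a far larger model, the same loss pushes the network to learn how scenes typically evolve, which is the intuition behind using passive observation as a route to common sense.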


Murphy and Criddle report, “Some experts are doubtful of whether LeCun’s vision is viable. Aron Culotta, associate professor of computer science at Tulane University, said common sense had long been ‘a thorn in the side of AI,’ and that it was challenging to teach models causality, leaving them ‘susceptible to these unexpected failures.’ One former Meta AI employee described the world modelling push as ‘vague fluff,’ adding: ‘It feels like a lot of flag planting.’” Common sense has been, and remains, a challenge. Dave Gunning, a former program manager at DARPA, once noted, “The absence of common sense prevents an intelligent system from understanding its world, communicating naturally with people, behaving reasonably in unforeseen situations, and learning from new experiences. This absence is perhaps the most significant barrier between the narrowly focused AI applications we have today and the more general AI applications we would like to create in the future.”[4]


Defining superintelligence or AGI is not straightforward. Jeremy Baum, a researcher at the UCLA Institute for Technology, Law & Policy, and John Villasenor, a Professor of Electrical Engineering, Law, Public Policy, and Management at UCLA, explain, “Artificial general intelligence is difficult to precisely define but refers to a superintelligent AI recognizable from science fiction. … The development of AGI will have a transformative effect on society and create significant opportunities and threats, raising difficult questions about regulation.”[5] Because the threats seem so severe, far more tends to be written about them than about the opportunities.


Will AGI Be Humankind’s Downfall?


Obviously, in a short article, I’m not going to be able to explore all the concerns being raised about AGI. Last year, AI pioneer Geoffrey Hinton resigned from Alphabet so he could openly express his concerns. He told Reuters, “Artificial intelligence could pose a ‘more urgent’ threat to humanity than climate change.”[6] During that interview he insisted, “I wouldn’t like to devalue climate change. I wouldn’t like to say, ‘You shouldn’t worry about climate change.’ That’s a huge risk too. But I think this might end up being more urgent. … With climate change, it’s very easy to recommend what you should do: you just stop burning carbon. If you do that, eventually things will be okay. For this it’s not at all clear what you should do.” Journalist Martin Coulter notes, “[Hinton] is now among a growing number of tech leaders publicly espousing concern about the possible threat posed by AI if machines were to achieve greater intelligence than humans and take control of the planet.”[7] Hinton’s big concern is with the people using the technology more than with the technology itself. He told journalist Cade Metz, “It is hard to see how you can prevent the bad actors from using it for bad things.”[8] He added, “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”


Other AI pioneers, like Jürgen Schmidhuber, Director of the AI Initiative at the King Abdullah University of Science and Technology (KAUST), aren’t quite as concerned as Hinton. Journalist Josh Taylor reports, “[Jürgen Schmidhuber,] once described as the father of artificial intelligence, is breaking ranks with many of his contemporaries who are fearful of the AI arms race, saying what is coming is inevitable and we should learn to embrace it.”[9] Taylor continues, “The German computer scientist says there is competition between governments, universities, and companies all seeking to advance the technology, meaning there is now an AI arms race, whether humanity likes it or not. ‘You cannot stop it,’ says Schmidhuber. ‘Surely not on an international level, because one country may have really different goals from another country. So, of course, they are not going to participate in some sort of moratorium. But then I think you also shouldn’t stop it. Because in 95% of all cases, AI research is really about our old motto, which is make human lives longer and healthier and easier.’”


Another AI pioneer, Fei-Fei Li, a computer scientist at Stanford University, sometimes referred to as the “godmother of AI,” agrees with Schmidhuber. She states, “I [worry] about the overhyping of human extinction risk. I think that is blown out of control.”[10] She adds, “It belongs to the world of sci-fi. There’s nothing wrong about pondering about all this, but compared to the other, actual social risks — whether it’s the disruption of disinformation and misinformation to our democratic process, or, you know, the kind of labor market shift or [privacy] issues — these are true social risks that we have to face because they impact real people’s real life. … There’s so many ways we can use [AI] to make people’s life better, work better. I don’t think we give enough voices to people who are actually out there, in the most imaginary way, creative way of trying to bring good to the world using AI.”


Concluding Thoughts


Baum and Villasenor conclude, “Whenever and in whatever form it arrives, AGI will be transformative, impacting everything from the labor market to how we understand concepts like intelligence and creativity. As with so many other technologies, it also has the potential of being harnessed in harmful ways. For instance, the need to address the potential biases in today’s AI systems is well recognized, and that concern will apply to future AGI systems as well. At the same time, it is also important to recognize that AGI will also offer enormous promise to amplify human innovation and creativity. In medicine, for example, new drugs that would have eluded human scientists working alone could be more easily identified by scientists working with AGI systems.” Although some people, like the late Paul Allen, were skeptical that a sentient AGI system will ever be developed, most computer scientists think we will get close, and we should prepare for when that occurs.


Tim Bajarin, Chairman at Creative Strategies, explains, “The intersection of human and AI interaction is not merely a frontier of technological innovation but a pivotal juncture redefining our relationships and productivity paradigms. As AI integrates deeper into our personal and professional lives, its influence extends beyond efficiency gains, encroaching upon the essence of human experience. This integration challenges traditional notions of creativity, empathy, and interpersonal connections, areas that were once believed to be exclusively human domains.”[11] He concludes, “The shift in the human-AI paradigm makes us rely more on these AI tools to do almost everything for us. ‘Doing’ is the essence of what makes us human. … This evolution also presents an unparalleled opportunity to augment our capabilities, enabling us to achieve higher productivity levels while fostering a new symbiosis between human intuition and artificial intelligence. The key lies in navigating this delicate balance without diminishing the intrinsic values that define our humanity.”


Footnotes
[1] Pippa Malmgren, “Will humans survive the rise of the machines?” UnHerd, 20 May 2024.
[2] Hannah Murphy and Cristina Criddle, “Meta AI chief says large language models will not reach human intelligence,” Financial Times, 22 May 2024.
[3] Ibid.
[4] Staff, “Teaching Machines Common Sense Reasoning,” Defense Advanced Research Projects Agency, 11 October 2018.
[5] Jeremy Baum and John Villasenor, “How close are we to AI that surpasses human intelligence?” The Brookings Institution, 18 July 2023.
[6] Martin Coulter, “AI pioneer says its threat to world may be ‘more urgent’ than climate change,” Reuters, 9 May 2023.
[7] Ibid.
[8] Cade Metz, “‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead,” The New York Times, 1 May 2023.
[9] Josh Taylor, “Rise of artificial intelligence is inevitable but should not be feared, ‘father of AI’ says,” The Guardian, 6 May 2023.
[10] Laura Bratton, “The ‘godmother of AI’ says stop worrying about an AI apocalypse,” Quartz, 9 May 2024.
[11] Tim Bajarin, “Will Artificial Intelligence Diminish Our Humanity?” Forbes, 20 May 2024.
