
Policymakers and Artificial Intelligence

December 30, 2021


Technology always outpaces policy. It’s inevitable. Policymakers don’t have a crystal ball revealing which technologies are emerging or how those technologies could affect society. As a result, history is their only guide, and history is not always reliable. Steve Ritter (@stephenjritter), Chief Technology Officer of Mitek Systems, observes, “Disruptive technology has always challenged legislators.”[1] Even when policymakers have a reliable history on which to draw, they are often slow to act. Ritter points out that it took 60 years after the first Ford Model T was introduced for lawmakers to mandate seat belts. As a result, Ritter notes, “Laws are reactionary by nature: the result of governments legislating against negative use cases once they become apparent.”


Ritter doesn’t believe policymakers can wait 60 years to enact legislation concerning artificial intelligence (AI). Like many other experts, he believes regulation is essential to ensure the development of AI proceeds in a way that benefits society more than it puts society at risk. He observes, “The nation’s leading scientists believe that artificial intelligence is such a risk that we need another Bill of Rights to protect what makes us human.” The problem is that a purely national focus is the wrong focus. Technology has never been containable within national borders, and passing legislation in one nation does nothing to ensure that individuals in other nations will act ethically. That’s why many experts are calling for international cooperation on artificial intelligence. An article published by the Brookings Institution notes, “At least 60 countries have adopted some form of policy for artificial intelligence.”[2] That’s less than a third of the world’s 195 countries, and the policies involved are not consistent. On the other hand, the article notes:


Work on developing global standards for AI has led to significant developments in various international bodies. These encompass both technical aspects of AI (in standards development organizations (SDOs) such as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the Institute of Electrical and Electronics Engineers (IEEE) among others) and the ethical and policy dimensions of responsible AI. In addition, in 2018 the G-7 agreed to establish the Global Partnership on AI, a multistakeholder initiative working on projects to explore regulatory issues and opportunities for AI development. The Organization for Economic Cooperation and Development (OECD) launched the AI Policy Observatory to support and inform AI policy development. Several other international organizations have become active in developing proposed frameworks for responsible AI development. In addition, there has been a proliferation of declarations and frameworks from public and private organizations aimed at guiding the development of responsible AI.


These efforts require some kind of alignment so that the scientific community has clear direction as it develops artificial intelligence systems.


What Are the Issues?


In the public’s mind, the most concerning issues involve the development of Artificial General Intelligence (AGI). An AGI system would be one capable of thinking like a human. It would be self-aware. And many pundits believe such a system would be a threat to humankind. Today, AI systems fall far short of AGI capabilities; nevertheless, critics are concerned about the ethical implications of using AI in solutions that recommend products, drive cars, diagnose medical conditions, and make other decisions that can adversely affect someone’s life.


Here’s the rub. Too little regulation could result in widespread misuse of AI. Too much regulation could result in less innovation, fewer benefits, and stunted economic growth. For example, Benjamin Mueller (@Ben_CDI), a senior policy analyst at the Center for Data Innovation, predicts, “If adopted, the EU’s Artificial Intelligence Act [AIA] will be the world’s most restrictive regulation of the development and use of artificial intelligence tools. It will not only limit AI development and use in Europe but impose significant costs on EU businesses and consumers. The AIA will cost the European economy €31 billion over the next five years and reduce AI investments by almost 20 percent.”[3]


Kilian Gross, Head of Unit for Artificial Intelligence Policy Development and Coordination at DG CONNECT and one of the lead authors of the proposed EU law, disagrees with that assessment. During a panel discussion in June 2021, he made it “clear that the [European] Commission recognizes the huge potential AI offers and wants to encourage its development in Europe.”[4] Gross stressed, “About 80 to 85 percent of AI does not need significant regulation.” AI features that do need regulation, he argued, include “opacity, difficulty to predict outcomes, challenges in explaining decisions, and data intensity.” He believes these features “require a law that addresses and mitigates potential violations to the fundamental rights and the safety of European citizens.”


What Action Is Required?


Darrell M. West, Vice President and Director of Governance Studies at the Brookings Institution, asserts, “AI is the transformative technology of our time. It is being deployed in many different areas and is altering how people communicate, work and learn. It offers a number of benefits but also poses considerable risks.”[5] Ritter adds, “Artificial Intelligence is a uniquely human technology. No technology knows more about us. On the flip side, this means it is excellent at manipulating humans.” Ritter believes that the EU’s AIA, if enacted, “could serve as a basis for other A.I. legislation around the world.” He goes on to note that this could be problematic. He writes:


The EU’s initial draft has caused significant debate. Critics on both sides of the argument point to vague language that might ultimately be defined by courts. The phrasing of concepts, for example, ‘manipulative A.I.,’ or ‘programs which can cause physical or emotional harm,’ has caused concerns. One nuance the text cleverly holds is a separation of impact and intent. This begets an understanding that A.I. can have real-life impacts, regardless of the intent. The EU approach looks to categorize different ‘behaviors,’ enforcing different requirements for A.I. looking to influence each behavioral subset. For example, the bar is set higher for algorithms influencing banking and financial behavior than for supply chain management.
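To make that tiered approach concrete, here is a minimal, purely hypothetical sketch of how a compliance team might encode such risk categories in software. The tier names, the mapping of domains to tiers, and the required controls are illustrative assumptions for this sketch, not language from the AIA itself.

```python
# Hypothetical sketch of a tiered AI risk register, loosely inspired by the
# risk-based approach described above. Tier names, domain classifications,
# and controls are illustrative assumptions, not the regulation's text.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1       # e.g., supply chain tools: few obligations
    LIMITED = 2       # e.g., chatbots: transparency duties
    HIGH = 3          # e.g., banking and finance: strict requirements
    UNACCEPTABLE = 4  # e.g., social scoring: prohibited outright

# Assumed mapping of application domains to tiers (illustrative only).
DOMAIN_TIERS = {
    "supply_chain_management": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "consumer_lending": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

# Assumed controls attached to each tier (illustrative only).
REQUIRED_CONTROLS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["disclose_ai_to_users"],
    RiskTier.HIGH: ["risk_management_plan", "human_oversight",
                    "logging_and_traceability", "conformity_assessment"],
}

def controls_for(domain: str) -> list[str]:
    """Return the controls a system in `domain` would need, or raise if banned."""
    tier = DOMAIN_TIERS[domain]
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError(f"{domain}: prohibited use case")
    return REQUIRED_CONTROLS[tier]

print(controls_for("consumer_lending"))         # higher bar for finance...
print(controls_for("supply_chain_management"))  # ...than for supply chains
```

Note the design choice the quoted passage highlights: the obligations attach to the behavioral category of the system, not to the developer’s intent.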


In spite of the challenges associated with the EU’s AIA, Mark Rolston (@markrolstonargo), Founder & Chief Creative Officer at argodesign, believes regulation is essential for the world economy to move forward. He explains, “Many longstanding ethical and regulatory concerns surrounding AI have been hanging over the heads of users, businesses, and the public for years. If improperly handled, AI can develop unethical biases, undermine legal and regulatory norms, and blur the lines of organizational accountability. With AI growing in use in consumer-facing contexts such as lending, fraud detection, hiring, and healthcare, it’s vital to address the risks of the technology head-on to ensure the public is protected and also to give businesses and investors confidence about the future of AI.”[6]


Concluding Thoughts


Ritter admits that a global approach to AI policy would be best. He also recognizes there are roadblocks to an international approach. “The availability of an existing policy framework raises the question of whether the U.S. should develop a separate regulation or instead work with other global powers on a unified approach,” he writes. “While a unified policy would of course reflect today’s global internet, the actual policing and enforcement of a global framework would be close to impossible.” As a result, he suggests the U.S. move ahead with its own legislation. He explains, “A.I. is too powerful a technology for us to wait and see. The A.I. genie may already be out of the bottle, but it’s never too late to codify what our digital rights should be.” Rolston agrees. He notes that, in addition to the organizations developing AI policies mentioned at the beginning of this article, the Linux Foundation’s AI & Data group, the World Economic Forum’s Global AI Action Alliance, and the Responsible AI Institute are also working on policies to address AI concerns.


He observes, “Common principles shared by many of these frameworks include a commitment to AI’s decisions being explainable, transparent, reproducible, and private.” He believes national policies can draw on these shared frameworks to help increase international alignment. Like Ritter, he understands enforcement could be a problem. He writes, “A major question that arises is how these frameworks are going to be implemented and enforced in practice.” Noting that national and international regulations are still some way off, he suggests industry organizations take the lead in establishing standards and certifications that can complement laws as they are enacted. “By ensuring that companies working on and with AI have a clear set of standards to follow,” he explains, “industry can help to build confidence in AI among itself, the public and investors.” He concludes, “If AI is to flourish as a sector and provide real societal value, then it’s essential that such certification takes off as a norm around the world.”
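As a rough illustration of how a certification scheme might operationalize those shared principles, here is a hypothetical self-assessment checklist in Python. The four principle names come from Rolston’s list; the check questions and the all-or-nothing pass rule are invented for this sketch.

```python
# Hypothetical self-assessment checklist for the four shared principles
# Rolston cites (explainable, transparent, reproducible, private). The
# questions and the pass/fail rule are illustrative assumptions.
PRINCIPLES = {
    "explainable":  "Can individual decisions be explained to an affected person?",
    "transparent":  "Is it disclosed when and how AI is used in the product?",
    "reproducible": "Can a documented decision be re-run with the same result?",
    "private":      "Is personal data minimized and protected end to end?",
}

def assess(answers: dict[str, bool]) -> tuple[bool, list[str]]:
    """Certify only if every principle passes; also report the gaps."""
    gaps = [p for p in PRINCIPLES if not answers.get(p, False)]
    return (not gaps, gaps)

certified, gaps = assess({
    "explainable": True,
    "transparent": True,
    "reproducible": False,  # e.g., model retrained on unversioned data
    "private": True,
})
print(f"certified={certified}, gaps={gaps}")  # certified=False, gaps=['reproducible']
```

A real certification body would rely on evidence and audits rather than yes/no answers; the sketch only shows how shared principles could become a checkable standard.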


While I agree that industry standards and certifications will help, a legal framework, enforceable within national boundaries, will eventually be required. The sooner such legislation is enacted, the more confidently companies and computer scientists can move forward with their efforts.


Footnotes
[1] Steve Ritter, “The U.S. urgently needs an A.I. Bill of Rights,” Fortune, 12 November 2021.
[2] Cameron F. Kerry, Joshua P. Meltzer, Andrea Renda, Alex Engler, and Rosanna Fanni, “Strengthening international cooperation on AI,” The Brookings Institution, 25 October 2021.
[3] Benjamin Mueller, “How Much Will the Artificial Intelligence Act Cost Europe?” Center for Data Innovation, 26 July 2021.
[4] Benjamin Mueller, “Recap: What’s Next on the EU’s Proposed AI Law?” Center for Data Innovation, 11 June 2021.
[5] Mike Miliard, “What are the upcoming policies that will shape AI – and are policymakers up to the task?” Healthcare IT News, 11 November 2021.
[6] Mark Rolston, “Why we need certification for responsible AI,” Computing, 26 May 2021.
