
AI Bill of Rights: Enforcement is the Issue

October 6, 2022


Earlier this week, the Biden Administration issued a Blueprint for an AI Bill of Rights. The announcement was greeted with mixed reviews. According to the announcement, “Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public. Too often, these tools are used to limit our opportunities and prevent our access to critical resources or services.”[1] On the other hand, the announcement notes, “These tools now drive important decisions across sectors, while data is helping to revolutionize global industries. Fueled by the power of American innovation, these tools hold the potential to redefine every part of our society and make life better for everyone.” The Blueprint, which is accompanied by a handbook entitled From Principles to Practice, establishes five principles which can be used by “anyone seeking to incorporate protections into policy and practice.”

 

The five principles identified by the White House Office of Science and Technology Policy are intended to “guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. … These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.” The principles are intended to:

 

• Protect people from unsafe or ineffective automated systems.
• Prevent discrimination by algorithms.
• Safeguard people from abusive data practices and give them agency over how their data is used.
• Ensure people are informed when an automated system is being used.
• Allow users to opt out of automated systems.

 

Although the Blueprint is only advisory in nature (i.e., it is nonbinding and doesn’t include any enforcement measures), tech journalist Angus Loten (@angusloten) reports, “Some technology leaders said the White House blueprint could lead to heavy-handed regulation that might risk putting U.S. businesses at a disadvantage.”[2]

 

U.S. is a Regulatory Laggard

 

Journalists Margaret Harding McGill (@margarethmcgill) and Ina Fried (@inafried) insist some type of regulation is required because the tech industry is barreling forward “in an AI free-for-all.”[3] They also note that the Biden Administration is late to the game. They list a number of other organizations that have previously weighed in on this subject. They include:

 

• IBM. “IBM produced a set of principles in 2017, calling for, among other things, AI that can explain itself. ‘Companies must be able to explain what went into their algorithm’s recommendations. If they can’t, then their systems shouldn’t be on the market,’ IBM said at the time.”

 

• European Union. “The EU released its list of guidelines back in 2019.” According to the Guidelines, trustworthy AI should be: (1) lawful — respecting all applicable laws and regulations; (2) ethical — respecting ethical principles and values; and, (3) robust — both from a technical perspective and with regard to its social environment.

 

• The Vatican. In 2020, “The Vatican published … what it dubbed an ‘algor-ethical’ framework saying that AI systems need to be designed to protect ‘the rights and the freedom of individuals so they are not discriminated against by algorithms.'”

 

• Trump Administration. “In 2020, the Trump administration outlined 10 regulatory principles for agencies writing rules for the technology, warning against over-regulating the systems.” Those principles were aimed at obtaining input from the public and experts about new regulations and basing decisions on scientific evidence; avoiding heavy regulation by conducting cost-benefit analyses and risk assessments and coordinating with other federal agencies to keep policies consistent; and promoting trust in AI by taking non-discrimination, safety, transparency, and fairness into account in any regulatory action.

 

According to McGill and Fried, “The tech industry is divided between some companies that say they are seeking to develop AI responsibly and others that believe in advancing the technology as quickly as possible regardless of potential problems. In this game, the fast deployers effectively rule out the possibility that voluntary guard rails might work.” Tech journalist Khari Johnson (@kharijohnson) adds, “The limited bite of the White House’s AI Bill of Rights stands in contrast to more toothy AI regulation currently under development in the European Union.”[4]

 

The Way Ahead

 

The White House Blueprint is unlikely to ease the tension between people concerned about the possible ill-effects of unethical AI and people who believe heavy-handed regulations will stifle innovation and economic growth. Nevertheless, some type of AI regulation is inevitable. Annette Zimmermann (@DrZimmermann), who researches AI, justice, and moral philosophy at the University of Wisconsin-Madison, told Johnson “she’s impressed with the five focal points chosen for the AI Bill of Rights, and that it has the potential to push AI policy and regulation in the right direction over time. But she believes the blueprint shies away from acknowledging that in some cases rectifying injustice can require not using AI at all. … Zimmermann would also like to see enforceable legal frameworks that can hold people and companies accountable for designing or deploying harmful AI.”

 

Mark Surman (@msurman), executive director of the Mozilla Foundation, agrees with Zimmermann that the Blueprint should be expanded “into something formal and enforceable.”[5] He argues, “The AI systems that permeate our lives are often built in ways that directly conflict with these principles. They’re built to collect personal data, to be intentionally opaque, and to learn from existing, frequently biased data sets.” On the other hand, Eric Schmidt (@ericschmidt), former chief executive of Alphabet Inc.’s Google, believes a cautious approach to regulation is warranted. He told Loten, “There are too many things that early regulation may prevent from being discovered.”

 

Privacy rights advocates generally see the Blueprint as a good start. Marc Rotenberg (@MarcRotenberg), President and Founder of the Center for AI and Digital Policy, calls the Blueprint’s principles “impressive.”[6] He adds, “This is clearly a starting point. That doesn’t end the discussion over how the US implements human-centric and trustworthy AI. But it is a very good starting point to move the US to a place where it can carry forward on that commitment.” With the EU poised to regulate AI in significant ways, I suspect the US will eventually follow suit. Regulatory alignment simply makes sense. How the industry is regulated is where disagreements are likely to continue. As we all know, the devil is in the details.

 

Footnotes
[1] Office of Science and Technology Policy, “Blueprint for an AI Bill of Rights,” The White House, 4 October 2022.
[2] Angus Loten, “White House Issues ‘Blueprint for an AI Bill of Rights’,” The Wall Street Journal, 4 October 2022.
[3] Margaret Harding McGill and Ina Fried, “White House’s AI “Bill of Rights” enters crowded field,” Axios, 4 October 2022.
[4] Khari Johnson, “Biden’s AI Bill of Rights Is Toothless Against Big Tech,” Wired, 4 October 2022.
[5] Loten, op. cit.
[6] Melissa Heikkilä, “The White House just unveiled a new AI Bill of Rights,” MIT Technology Review, 4 October 2022.
