AI EU Oh!

Stephen DeAngelis

May 20, 2021

Professionals in the field of artificial intelligence (AI) are keenly aware of recent moves by the European Union (EU) to regulate how and when AI can be used throughout Europe. In late April, EU officials proposed new regulations for high-risk uses of artificial intelligence, like facial scanning. Legal scholar Valeria Marcia and Brookings Senior Fellow Kevin C. Desouza (@KevDesouza) write, “Although the advantages of AI in our daily lives are undeniable, people are concerned about its dangers. Inadequate physical security, economic losses, and ethical issues are just a few examples of the damage AI could cause. In response to AI dangers, the European Union is working on a legal framework to regulate artificial intelligence.”[1] Journalist Angus Loten (@angusloten) reports corporate leaders have greeted the news with mixed feelings. He writes, “Some corporate technology leaders say a proposed clampdown by European regulators on the use of artificial intelligence will run up costs and stifle innovation, just as companies are starting to unlock its potential. Others say stronger oversight will help build public trust in AI systems, which have inflamed tensions over data privacy, consumer protection and misuse — especially in areas like facial recognition.”[2]

 

What the EU Proposals Entail

 

Journalists Sam Schechner and Parmy Olson report, “European officials want to limit police use of facial recognition and ban the use of certain kinds of AI systems, in one of the broadest efforts yet to regulate high-stakes applications of artificial intelligence. The European Union’s executive arm proposed a bill that would also create a list of so-called high-risk uses of AI that would be subject to new supervision and standards for their development and use, such as critical infrastructure, college admissions and loan applications. Regulators could fine a company up to 6% of its annual world-wide revenue for the most severe violations, though in practice EU officials rarely if ever mete out their maximum fines. The bill is one of the broadest of its kind to be proposed by a Western government, and part of the EU’s expansion of its role as a global tech enforcer.”[3]

 

Schechner and Olson note the “proposal faces a long road — and potential changes — before it becomes law. In the EU, such laws must be approved by both the European Council, representing the bloc’s 27 national governments, and the directly elected European Parliament, which can take years.” Like corporate executives, they report digital-rights activists have mixed reactions to the proposal. They explain, “Some digital-rights activists, while applauding parts of the proposed legislation, said other elements appear too vague and offer too many loopholes.”

 

What the Regulations Would Mean for Business

 

Schechner and Olson report, “There are a handful of specific practices that face outright bans in the bill. In addition to social credit systems, such as those used by the Chinese government, it also would ban AI systems that use ‘subliminal techniques’ or take advantage of people with disabilities to ‘materially distort a person’s behavior’ in a way that could cause physical or psychological harm. While police would be generally blocked from using what is described as ‘remote biometric identification systems’ — such as facial recognition — in public places in real time, judges can approve exemptions that include finding abducted children, stopping imminent terrorist threats and locating suspects of certain crimes, ranging from fraud to murder.” Christian Borggreen, vice president and head of the Brussels office at the Computer & Communications Industry Association, which represents a number of large technology companies including Amazon, Facebook and Google, believes the EU got some things correct. He told the reporters, “It’s positive that the commission has taken this risk-based approach.” However, not everyone is pleased.

 

In a world in which technology demonstrates little respect for borders, some European tech companies fear implementation of the regulations will place them at a severe disadvantage. Benjamin Mueller, a senior policy analyst at the Center for Data Innovation, told Schechner and Olson, “It’s going to make it prohibitively expensive or even technologically infeasible to build AI in Europe. The U.S. and China are going to look on with amusement as the EU kneecaps its own startups.” Business reporter Sissi Cao (@sissicao) doesn’t believe American tech companies will be amused for long. She writes, “Like the EU’s General Data Protection Regulation (GDPR) enacted in 2018, the artificial intelligence regulation is expected to help set a template for the U.S. and governments around the world on regulating emerging technologies.”[4] China, however, won’t be one of those governments — and that is a big concern.

 

Margrethe Vestager, the European Commission’s executive vice president for the digital age, doesn’t buy the argument that the proposed regulations would kneecap European tech companies. When announcing the proposals, she stated, “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way.” Nevertheless, some industry representatives hope the EU takes a deliberate approach to enacting any new regulations. Guido Lobrano, Vice President and Director General for the Information Technology Industry Council, stated, “We urge European policy-makers to focus on flexible regulation, targeted to the highest-risk applications. AI presents global opportunities and challenges, requiring cooperation and alignment between jurisdictions.”[5] Cao reports, “American tech giants with business in Europe are already gearing up to challenge the EU’s proposed law.”

 

Concluding Thoughts

 

Marcia and Desouza are more sanguine about the EU proposals than most. They write, “The Commission’s proposal represents a very important step towards the regulation of artificial intelligence.” They add, “In its framework, the European Commission adopts an innovation-friendly approach. A very interesting aspect is that the Commission supports innovation through so-called AI regulatory sandboxes for non-high-risk AI systems, which provide an environment that facilitates the development and testing of innovative AI systems.” Journalist Ashley Gold concludes, “Europe has generally moved faster than the U.S. in imposing new regulations on the tech industry, as it did with privacy and monopoly concerns. Once again, the EU has set the terms of debate on how to govern a new technology, and the U.S. will need to react.”[6] Peter van der Putten, director of decisioning solutions at software firm Pegasystems Inc., isn’t surprised that regulators are trying to keep pace with the technology. He told Loten, “AI is progressing at such a rapid pace right now, both the good and the bad of it.” He said tech vendors and consumers alike will benefit from clear rules and boundaries over its use, and he called the proposed EU regulations a “good first step.” He added, “In the long run, consumers will vote with their wallets.” When it comes to AI, however, consumers often don’t even know it’s being used.

 

Footnotes
[1] Valeria Marcia and Kevin C. Desouza, “The EU path towards regulation on artificial intelligence,” The Brookings Institution, 26 April 2021.
[2] Angus Loten, “Corporate Tech Leaders Are Mixed on EU Artificial Intelligence Bill,” The Wall Street Journal, 21 April 2021.
[3] Sam Schechner and Parmy Olson, “Artificial Intelligence, Facial Recognition Face Curbs in New EU Proposal,” The Wall Street Journal, 21 April 2021.
[4] Sissi Cao, “Will Europe’s Historic Artificial Intelligence Law Be a Template for United States?” Observer, 24 April 2021.
[5] Ashley Gold, “EU proposes new rules for artificial intelligence,” Axios, 21 April 2021.
[6] Ibid.