
Ethics is Now a Hot Topic in Artificial Intelligence

April 5, 2019


A couple of years ago at the First Annual Enterra Solutions® Cognitive Computing Summit, I asked the Venerable Tenzin Priyadarshi, Director of the Ethics Initiative at The Dalai Lama Center for Ethics and Transformative Values at MIT, to address the topic of ethics as it relates to artificial intelligence (AI). Some skeptics in the audience openly questioned the need for ethics in the field of AI. Priyadarshi did an excellent job making his case. He stated the Dalai Lama Center views ethics as “an optimization of human behavior” rather than a constraint.[1] It follows that ethics in artificial intelligence systems would also act as an optimizer. Since those early discussions about ethics and AI, the topic has heated up. Scott Rosenberg (@scottros) writes, “There’s no lack of reports on the ethics of artificial intelligence. But most of them are lightweight — full of platitudes about ‘public-private partnerships’ and bromides about putting people first. They don’t acknowledge the knotty nature of the social dilemmas AI creates, or how tough it will be to untangle them. [A] report from the AI Now Institute isn’t like that. It takes an unblinking look at a tech industry racing to reshape society along AI lines without any guarantee of reliable and fair results.”[2] The AI Now report concludes that efforts to infuse AI systems with ethics have flopped. Should we care? Rosenberg notes, “AI systems are being introduced in policing, education, healthcare, and other environments where the misfiring of an algorithm could ruin a life.” Yes, we should care.


The case for ethics in AI


“The growing shift away from ethics and empathy in the creation of our digital future,” writes Kalev Leetaru (@kalevleetaru), a Senior Fellow at the George Washington University Center for Cyber & Homeland Security, “is both profoundly frightening for the Orwellian world it is ushering in, but also a sad commentary on the academic world that trains the data scientists and programmers [who] are shifting the online world away from privacy.”[3] The ready availability of so much data, Leetaru asserts, has resulted in “a new generation of programmers and data scientists who view research ethics as merely an outdated obsolete historical relic that was an obnoxious barrier preventing them from doing as they pleased to an unsuspecting public.”


Chris Middleton (@strategistmag) agrees there are too many programmers who see nothing wrong with getting involved in “because-we-can” projects.[4] “In such a febrile environment,” Middleton writes, “the risk is that the twin problems of confirmation bias in research and human prejudice in society become an automated pandemic: systems that are designed to tell people exactly what they want to hear; or software that perpetuates profound social problems.” Middleton’s and Leetaru’s views accurately reflect the public’s growing concern about how personal data is being used to support nefarious activities. Leetaru insists ethics needs to be part of every curriculum that teaches computer programming skills. “In the end,” he writes, “the only thing standing between a safe and privacy-first web and an Orwellian online dystopia is the empathy and ethics of those creating our digital world. Perhaps if our technical curriculums prioritized those two concepts with the same intensity they emphasize technical understanding, our digital world might evolve into something far less sinister.”


“The problem for the many socially and ethically conscious academics working in the field,” Middleton asserts, “is that business often leaps before it looks, or thinks. A recent global study by consultancy Avanade found that 70% of the C-level executives it questioned admitted to having given little thought to the ethical dimensions of smart technologies.” That situation may be changing. “The call for artificial intelligence ethics specialists,” writes John Murawski, “is growing louder as technology leaders publicly acknowledge that their products may be flawed and harmful to employment, privacy and human rights.”[5] He reports several large tech companies have already hired ethics specialists to help guide their AI efforts. “While the position is still rare and being defined,” Murawski writes, “it’s likely to become much more common. … A Brookings Institution report in September advised businesses to hire AI ethicists and create an AI review board, develop AI audit trails and implement AI training programs.


“Tellingly, the Brookings report also advised organizations to create a remediation plan for the eventuality that their AI technology does inflict social harm.” Cade Metz (@CadeMetz) believes corporations have been motivated primarily by activist protests. He explains, “As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.”[6]


Is ethical AI even possible?


Although it’s encouraging that some large tech companies are beginning to think about the ethical dilemmas created by their products, the world is full of people who deliberately set out to deceive, stoke biases, and engage in other unethical practices. As a result, Metz openly wonders: Is ethical AI even possible? He explains, “Tensions continue to rise as some question whether [corporate] promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation.” Even government regulation, however, won’t stop bad actors from doing unethical things. Metz explains, “Policymakers call this a ‘dual-use technology.’ It has everyday commercial applications, like identifying designer handbags on a retail website, as well as military applications, like identifying targets for drones. … Rapidly advancing forms of artificial intelligence can improve transportation, health care and scientific research. Or they can feed mass surveillance, online phishing attacks and the spread of false news.” Metz concludes, “All this means that building ethical artificial intelligence is an enormously complex task. It gets even harder when stakeholders realize that ethics are in the eye of the beholder.”


Concluding thoughts


Like many technologies, artificial intelligence is neither inherently good nor inherently bad. Kate Crawford (@katecrawford), co-founder of AI Now, explained to Rosenberg, “It’s not a question of just tweaking the numbers to try and remove systemic inequalities and biases.” And even though there will continue to be bad people willing to do unethical things, Crawford believes we still need to do our best to make AI systems ethical. She explains, “At the very least we should expect a deep understanding of how these systems can be made fairer, and of how important these decisions are to people’s lives. I don’t think it’s too big an ask. And I think the most responsible producers of these systems really do want them to work well. This is a question of starting to back those good intentions with strong research and strong safety thresholds. It’s not beyond our capacity. If AI is going to be moving at this rapid pace into our core social institutions, I see it as absolutely essential.” I wholeheartedly agree.
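
To make Crawford’s “tweaking the numbers” remark concrete, the sketch below shows the simplest kind of numeric fairness audit: a demographic-parity check that flags when a model’s favorable-decision rate diverges across groups. It is a hypothetical illustration only; the function name, the toy data, and the 10% threshold are invented for this example and are not drawn from any of the reports cited here.

```python
# Hypothetical illustration: a demographic-parity audit.
# All names, data, and the 0.10 threshold are invented for this sketch.

def demographic_parity_gap(decisions, groups):
    """Difference in favorable-decision rates between the best- and
    worst-treated groups.

    decisions: 0/1 model outputs (1 = favorable decision)
    groups:    group label for each decision (e.g., "A" or "B")
    """
    rates = {}
    for label in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == label]
        rates[label] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Example: simulated loan decisions for two applicant groups.
decisions = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.40 for this toy data
if gap > 0.10:  # arbitrary illustrative threshold
    print("Audit flag: favorable-decision rates diverge across groups.")
```

Even a clean result from a check like this says nothing about how the training data were gathered or whom the system will be used on, which is Crawford’s point: such checks are necessary groundwork, to be backed by “strong research and strong safety thresholds,” not proof that a system is fair.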


Footnotes
[1] Stephen DeAngelis, “Artificial Intelligence and Ethics,” Enterra Insights, 28 July 2017.
[2] Scott Rosenberg, “Why AI is Still Waiting for Its Ethics Transplant,” Wired, 1 November 2017.
[3] Kalev Leetaru, “Do We Need To Teach Ethics And Empathy To Data Scientists?” Forbes, 8 October 2018.
[4] Chris Middleton, “Navigating the AI ethical minefield without getting blown up,” Diginomica, 5 July 2017.
[5] John Murawski, “Need for AI Ethicists Becomes Clearer as Companies Admit Tech’s Flaws,” The Wall Street Journal, 1 March 2019.
[6] Cade Metz, “Is Ethical A.I. Even Possible?” The New York Times, 1 March 2019.
