
Can Artificial Intelligence Make Ethical Decisions?

June 18, 2018


Why do we need to discuss ethics and artificial intelligence? That exact question was posed to the Venerable Tenzin Priyadarshi, Director of the Ethics Initiative at The Dalai Lama Center for Ethics and Transformative Values at MIT, during his talk at the First Annual Enterra Solutions® Cognitive Computing Summit held last June.[1] The simple answer to that question is that machines empowered by artificial intelligence (AI) are making decisions affecting people’s lives and, hence, they need a moral compass to help them make those decisions. Smita Sinha (@smita21sinha), a data science journalist, asks a few more pertinent questions: “Can AI make ethically-sound decisions? Is there a need for regulation? If so, what kind?”[2] Another journalist, Melony Rocque, keeps the questions coming: “Who is responsible if AI makes a mistake with potentially disastrous consequences? What about AI bias, inherited from its human creators or ‘learned’? Is AI secure and, of course, will AI reach the singularity and take over?”[3] When it comes to artificial intelligence, it seems the questions are endless.


AI and Ethics


At a conference held earlier this year, a computer scientist from Harvard University discussed an AI algorithm designed to tell police whether or not an individual was a gang member. Foteini Agrafioti (@fagrafioti), Chief Science Officer of the Royal Bank of Canada and the head of Borealis AI, reports, “When grilled over potential misuses, the Harvard University computer scientist who presented the work waved the question off, saying: ‘I’m just an engineer’.”[4] Agrafioti laments, “Statements like this from within the machine learning community reveal a divide over ethical responsibility. On one side are researchers who see their work as a function of pure scientific inquiry that should be allowed to advance without interference; on the other side are those who loudly demand that the scientists and companies building today’s AI technologies have an obligation to consider the broad and long-term impact of their work. Most fall somewhere along this spectrum.”


Oren Etzioni (@etzioni), CEO of the Allen Institute for AI and a professor at the University of Washington’s Allen School of Computer Science, reports some computer scientists have suggested the need for an AI Hippocratic Oath. “In the foreword to Microsoft’s recent book, The Future Computed,” Etzioni writes, “executives Brad Smith and Harry Shum proposed that Artificial Intelligence practitioners highlight their ethical commitments by taking an oath analogous to the Hippocratic Oath sworn by doctors for generations. In the past, much power and responsibility over life and death was concentrated in the hands of doctors. Now, this ethical burden is increasingly shared by the builders of AI software.”[5] He goes on to suggest wording for the oath AI practitioners should take. Oaths and pledges might remind AI researchers and engineers of the responsibilities they bear, but words alone will do little to actually prevent the unethical use of AI. As Agrafioti observes, “If someone builds a system, someone else will find a way to fleece that system.”


Tech companies are not blind to the situation. Sinha reports, “In 2016, tech giants like Google, Facebook, Amazon, IBM and Microsoft set up an industry-led non-profit consortium ‘Partnership on AI to Benefit People and Society’ to come up with ethical standards for researchers in AI in cooperation with academics and specialists in policy and ethics. And also to pacify public fears about the human-replacing technology. Later in 2017, other companies like Accenture and McKinsey too joined the alliance.” There is, however, a great deal of public cynicism when it comes to self-regulation by the big tech companies. Anthony Giddens (@AnthonyGiddens1), former director of the London School of Economics and a member of the House of Lords Select Committee on Artificial Intelligence, suggests one place to start is with a charter “that seeks to find a new balance between innovation and corporate responsibility.”[6] He likens the charter, which was drafted by members of the Select Committee on which he sits, to the 13th-century Magna Carta. Giddens reports the main elements of that charter insist AI should:


  • Be developed for the common good.
  • Operate on principles of intelligibility and fairness: users must be able to easily understand the terms under which their personal data will be used.
  • Respect rights to privacy.
  • Be grounded in far-reaching changes to education. Teaching needs reform to utilize digital resources, and students must learn not only digital skills but also how to develop a critical perspective online.
  • Never be given the autonomous power to hurt, destroy or deceive human beings.


He adds, “These principles form the basis of a cross-sector AI code that should be developed both nationally and internationally.” That statement highlights that oaths and charters are only the beginning. Without some kind of regulation and enforcement, they won’t prevent unethical use of AI in the future. Sinha explains, “Ethical AI is achievable provided that there is more human, government and companies intervention.” Rob High, Vice President and Chief Technology Officer of IBM Watson, agrees. He writes, “The governance of AI and its utility to society rests on three constituents: providers of technology, consumers of technology and, to some extent, governments.”[7] He suggests ways each constituency can help in the effort:


  • “Providers of technology have a responsibility to build in such a way that encourages positive use and discourages abuse. We need to do this while measuring ourselves along the way so we can ensure what we’re doing creates a precedence of positive, ethical use. And the technology created should be transparent, conveying confidence in its findings.”
  • “Technology consumers have the responsibility to demand products that create a beneficial effect. We, as consumers, must reject technologies that are destructive. Everyone can blame major phone manufacturers for creating and nurturing our dependence on our smartphones, but we also have the responsibility to put our phones down and demand advancements that make our phones safer, such as settings that automatically disable communication apps while driving.”
  • “To some extent, governments have a role in the governance of AI through regulatory practices. This certainly doesn’t mean that one country’s government can regulate what’s happening in another, but each country can impose a degree of influence on how they treat these new technologies.”


Regulation and enforcement are important because someone needs to be responsible for ensuring good intentions are carried out. As the old saying goes, “When everyone is responsible, no one is responsible.”


Summary


Agrafioti concludes, “Fairness can’t be an afterthought; its impacts must be understood and respected by industry, the place where AI meets the market and touches human lives. Safeguards should be built right into the core of our systems in order to ensure that machine learning will not introduce bias and that it will result in explainable and justifiable actions.” Giddens adds, “The advantages of the digital revolution have been huge and have reshaped our lives, in many respects for the better. As in previous technological revolutions, societies must find a way to reap the benefits of innovation while containing the problems and hazards. A charter that protects the rights and liberties of citizens — a Magna Carta for the digital age — is the place to start.” Governments need to turn good intentions (e.g., oaths and charters) into regulations with teeth. AI is increasingly going to affect our lives and its ethical use will be critical.
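Agrafioti’s call to build safeguards “right into the core of our systems” can be made concrete with a simple pre-deployment check. The sketch below is illustrative only and is not drawn from any of the cited work; it measures a model’s demographic parity gap, i.e., the spread in positive-prediction rates across groups. The function name, data, and alert threshold are all hypothetical.

```python
# Minimal sketch of a fairness safeguard: flag a model whose positive
# predictions are unevenly distributed across demographic groups.
# All names, data, and the 0.2 threshold are hypothetical examples.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any
    two groups; 0.0 means the rates are perfectly balanced."""
    counts = {}  # group -> (total seen, positive predictions)
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + (1 if pred else 0))
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Hypothetical loan-approval predictions for two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
if gap > 0.2:  # illustrative threshold, not an industry standard
    print(f"Warning: positive rates differ by {gap:.0%} across groups")
```

A check like this is a blunt instrument (demographic parity is only one of several fairness criteria), but running it automatically before deployment is one way to make fairness a built-in property rather than an afterthought.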


Footnotes
[1] Stephen DeAngelis, “Artificial Intelligence and Ethics,” Enterra Insights, 28 July 2017.
[2] Smita Sinha, “Is Ethical AI A Myth Or Can It Be Achieved?” Analytics India Magazine, 23 April 2018.
[3] Melony Rocque, “AI’s ethical reckoning, ready or not,” SmartCitiesWorld, 21 March 2018.
[4] Foteini Agrafioti, “Ensuring that artificial intelligence is ethical? That’s everyone’s responsibility,” Maclean’s, 10 March 2018.
[5] Oren Etzioni, “A Hippocratic Oath for artificial intelligence practitioners,” TechCrunch, 14 March 2018.
[6] Anthony Giddens, “A Magna Carta for the digital age,” The Washington Post, 2 May 2018.
[7] Rob High, “The ethics of artificial intelligence: On professional, personal, and global responsibility,” Mobile Business Insights, 13 February 2018.
