
Artificial Intelligence and Ethics

July 28, 2017


You hear and read a lot of scary stuff about artificial intelligence (AI). That’s why I thought it was important to address the ethical side of AI at the First Annual Enterra Solutions® Cognitive Computing Summit. The person I selected to speak on that subject was the Venerable Tenzin Priyadarshi, Director of the Ethics Initiative at The Dalai Lama Center for Ethics and Transformative Values at MIT. As a member of the Center’s Board of Directors, I am keenly aware of his qualifications to address the subject. By way of introduction, The Center is dedicated to inquiry, dialogue, and education on the ethical and humane dimensions of life. As a collaborative and nonpartisan think tank, The Center focuses on the development of interdisciplinary research and programs in varied fields of knowledge, from science and technology to education and international relations. Its programs emphasize responsibility and examine meaningfulness and moral purpose between individuals, organizations, and societies.

 

Priyadarshi began his Summit discussion by noting that ethics seems to have vanished from the curricula of many colleges. He admitted it was a difficult subject to teach well. Business schools that tried to introduce ethics courses built them around case studies, such as the Enron scandal, with the counterproductive, if pragmatic, result that students came away with the lesson to avoid becoming whistleblowers rather than to behave with integrity. He stated that The Center views ethics as “an optimization of human behavior” rather than a constraint. It follows that ethics embedded in artificial intelligence systems would also act as an optimizer. There were, however, some skeptics in the audience. Priyadarshi explained that AI is likely to play a significant future role in areas such as health care and education, where an ethical dimension seems entirely appropriate. He went on to note that humanity must address the question of what will happen if (or when) AI becomes the dominant force, and how to include ethics in the development of AI so that the post-singularity AI we get is the AI we want.

 

While some Summit participants questioned the need for ethics in AI, many analysts point to the automotive sector as a good example of why ethics matter. The case in point is driverless vehicles. Priyadarshi asked, “What should the AI governing an autonomous car do if faced with either running over three pregnant women and their three toddlers or self-destructing, killing the car’s owner? The ethical answer is to self-destruct; but, members of the public are unlikely to purchase a car with such programming, knowing that it could make them victims. To avoid forcing members of the car-buying public to choose, manufacturers are selling cars that are assistive, not autonomous.” The time is rapidly approaching, however, when autonomous vehicles will have to make such choices; a toy sketch of that trade-off follows the quote below. In an article about AI and ethics, Richard Waters (@RichardWaters) asserts that some of the choices we now consider ethical are really social. He explains, “Robotics experts say that many of the challenges are not so much ethical as technological and social: better design and changing social norms could resolve some of the perceived problems and narrow the range of truly moral conundrums that robots give rise to.”[1] Nevertheless, Priyadarshi insists, “The opportunity exists to think deeply about the ethical basis that should be part of AI and instill it at this crucial juncture when AI is beginning to spread its tendrils throughout society.” Gillian Christie (@gchristie34) and Derek Yach (@swimdaily) agree with that assessment.[2] They write:

“A Fourth Industrial Revolution is arising that will pose tough ethical questions with few simple, black-and-white answers. Smaller, more powerful and cheaper sensors; cognitive computing advancements in artificial intelligence, robotics, predictive analytics and machine learning; nano, neuro and biotechnology; the Internet of Things; 3D printing; and much more, are already demanding real answers really fast. And this will only get harder and more complex when we embed these technologies into our bodies and brains to enhance our physical and cognitive functioning.”
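
To make the trade-off concrete, below is a toy, purely illustrative Python sketch of the harm-minimizing choice at the heart of the dilemma Priyadarshi posed above. The outcome descriptions, harm weights, and function names are all hypothetical; no real autonomous-vehicle planner is claimed to work this way.

# Toy illustration only: a naive "minimize total harm" chooser applied to the
# driverless-car dilemma. All outcomes and weights are invented for the example.

def expected_harm(outcome):
    # Sum the (hypothetical) harm weights of everyone affected by an outcome.
    return sum(person["harm_weight"] for person in outcome["affected"])

def choose_action(outcomes):
    # Pick the action whose outcome minimizes total expected harm.
    return min(outcomes, key=expected_harm)

# The two outcomes from the thought experiment, with made-up weights.
self_destruct = {
    "action": "self-destruct",
    "affected": [{"who": "owner", "harm_weight": 1.0}],
}
continue_ahead = {
    "action": "run over the pedestrians",
    "affected": [{"who": f"pedestrian {i}", "harm_weight": 1.0} for i in range(6)],
}

print(choose_action([self_destruct, continue_ahead])["action"])  # -> "self-destruct"

The uncomfortable output is the point: a naive harm-minimizing rule sacrifices the owner, which is exactly the programming Priyadarshi says car buyers are unlikely to accept.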

Fortunately, Priyadarshi is not alone in his quest to involve ethics in the development of artificial intelligence systems. Michael Irving reports that this past January the Future of Life Institute hosted the Beneficial Artificial Intelligence (BAI) 2017 conference. “[The Institute] gathered AI researchers from universities and companies to discuss the future of artificial intelligence and how it should be regulated.”[3] He notes that one of the unique things the Institute did was to quiz participants ahead of the conference about “how they thought AI development needed to be prioritized and managed in the coming years, and used those responses to create a list of potential points. The revised version was studied at the conference, and only when 90 percent of the scientists agreed on a point would it be included in the final list.” The final list included 23 points; a short sketch of that 90-percent consensus rule follows the list below. According to Irving, the list “reads like an extended version of Isaac Asimov’s famous Three Laws of Robotics. The 23 points are grouped into three areas: Research Issues, Ethics and Values, and Longer-Term Issues.” Here is the list published by the Future of Life Institute:

 

Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.
4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.
5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

 

Ethics and Values

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
14) Shared Benefit: AI technologies should benefit and empower as many people as possible.
15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.
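
As noted above, here is a small, purely illustrative Python sketch of the 90-percent consensus rule Irving describes. The principle names and vote tallies below are invented for the example.

# Toy illustration only: a point made the final list when at least 90 percent
# of respondents agreed with it. All tallies below are hypothetical.

CONSENSUS_THRESHOLD = 0.90

# Hypothetical tallies: (yes votes, total respondents) per candidate point.
votes = {
    "Research Goal": (112, 120),
    "Race Avoidance": (109, 120),
    "A contested point": (95, 120),
}

def adopted(yes, total, threshold=CONSENSUS_THRESHOLD):
    # Keep a point only if the share of agreement meets the threshold.
    return yes / total >= threshold

final_list = [name for name, (yes, total) in votes.items() if adopted(yes, total)]
print(final_list)  # ['Research Goal', 'Race Avoidance']; 95/120 is about 79% and falls short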

 

Point 15, Shared Prosperity, was a topic Priyadarshi touched on in his remarks at the Cognitive Computing Summit. He noted, “In today’s society, people have been programmed to equate self-identity with income-producing work, which threatens to usher in a wave of dissociation from sense of self as AI pushes people out of their professional existence.” He insisted it is not too early to discuss how society is going to function in a world characterized by automation. Mike Loukides (@mikeloukides), Vice President of Content Strategy for O’Reilly Media, agrees. “Even though we’re still in the earliest days of AI,” he writes, “we’re already seeing important issues rise to the surface: issues about the kinds of people we want to be, and the kind of future we want to build.”[4]

 

Footnotes
[1] Richard Waters, “Why it is hard to teach robots to choose wisely,” Financial Times, 20 January 2016.
[2] Gillian Christie and Derek Yach, “Consider ethics when designing new technologies,” TechCrunch, 31 December 2016.
[3] Michael Irving, “Move over Asimov: 23 principles to make AI safe and ethical,” New Atlas, 2 February 2017.
[4] Mike Loukides, “The ethics of artificial intelligence,” O’Reilly, 14 November 2016.
