
Artificial Intelligence and Ethical Choices

December 19, 2016


“How can we be sure,” asks Alex Gray, “that artificially intelligent robots will make ethical choices?”[1] That’s an excellent question and one with which a growing number of researchers and pundits wrestle. Olivia Goldhill (@OliviaGoldhill) asks a similar question. “As robots become more advanced,” she writes, “their ethical decision-making will only become more sophisticated. But this raises the question of how to program ethics into robots, and whether we can trust machines with moral decisions.”[2] Of course, what’s really being programmed is the artificial intelligence (AI) that enlivens smart machines — be they robots or cognitive computing systems. Mike Loukides (@mikeloukides), Vice President of Content Strategy for O’Reilly Media, adds, “We do need to ensure that AI works for us rather than against us; we need to think ethically about the systems that we’re building.”[3]

 

Artificial Intelligence Gone Bad

 

Lest you think the discussion about ethics and artificial intelligence is merely an academic exercise, Gray provides five recent examples of AI gone awry. The first example involves Microsoft’s chatbot named Tay. Gray notes, “[Tay] was meant to be a friendly chatbot that would sound like a teenage girl and engage in light conversation with her followers on Twitter. However, within 24 hours she had been taken off the site because of her racist, sexist and anti-Semitic comments.” The second example discussed by Gray is a hypothetical case involving autonomous vehicles. “How can self-driving cars be programmed to make an ethical choice when it comes to an unavoidable collision?” Gray asks. “Humans would seriously struggle when deciding whether to slam into a wall and kill all passengers, or hitting pedestrians to save those passengers. So how can we expect a robot to make that split-second decision?” She notes that the MIT Media Lab is studying that very topic.

The third example discussed by Gray involves bias. “Less physically harmful, but just as worrying,” she writes, “are robots that learn racist behaviour. When robots were asked to judge a beauty competition, they overwhelmingly chose white winners. That’s despite the fact that, while the majority of contestants were white, many people of colour submitted photos to the competition, including large numbers from India and Africa.” I’ll discuss ethics and big data in a future article. Gray’s fourth example (and one I will discuss further below) involves errant machine learning. Gray writes, “In a similar case, image tagging software developed by Google and Flickr suffered many disturbing mishaps, such as labelling a pair of black people gorillas and calling a concentration camp a ‘jungle gym’. Google said sorry and admitted it was a work in progress: ‘Lots of work being done and lots is still to be done, but we’re very much on it.'” Gray’s final example involves a cleaning robot. “One paper recently looked at how artificial intelligence can go wrong in unexpected ways,” she explains. “For instance, what happens if a robot, whose job it is to clean up mess, decides to knock over a vase, rather than going round it, because it can clean faster by doing so?”
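The cleaning-robot example can be read as a reward-design problem: if the objective only rewards finishing the job quickly, knocking over the vase is the “rational” choice. The minimal sketch below is illustrative only (the reward terms, weights, and numbers are my assumptions, not taken from the paper Gray cites), but it shows how an explicit side-effect penalty changes what the robot prefers.

```python
# Hypothetical reward shaping for the cleaning-robot example.
# The reward terms, weights, and numbers below are illustrative assumptions,
# not taken from the paper Gray cites.

def reward(mess_cleaned: float, time_taken: float, side_effects: int,
           side_effect_penalty: float = 10.0) -> float:
    """Reward cleaning, mildly penalize slowness, heavily penalize collateral damage."""
    return mess_cleaned - 0.1 * time_taken - side_effect_penalty * side_effects

# Plan A: go around the vase (slower, nothing broken).
go_around = reward(mess_cleaned=1.0, time_taken=30.0, side_effects=0)

# Plan B: knock the vase over (faster, one object destroyed).
knock_over = reward(mess_cleaned=1.0, time_taken=20.0, side_effects=1)

print(f"go around: {go_around:.1f}, knock over the vase: {knock_over:.1f}")
# With side_effect_penalty=0 the robot prefers Plan B; with the penalty it prefers Plan A.
```

The point is not the particular numbers but that the objective, as written, is what the machine optimizes; if collateral damage never appears in the objective, the machine has no reason to avoid it.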

 

What Can Be Done

 

Nick Bostrom, a philosophy professor at Oxford University, and Eliezer Yudkowsky, a Research Fellow at the Singularity Institute for Artificial Intelligence, note, “The prospect of AIs with superhuman intelligence and superhuman abilities presents us with the extraordinary challenge of stating an algorithm that outputs superethical behavior.”[4] Most pundits, including Gray, lay the blame for missteps at the feet of the people who develop algorithms. “When things do go wrong,” Gray writes, “one explanation is the fact that algorithms, the computer coding that powers the decision-making, is written by humans, and is therefore subject to all the inherent biases that we have.” Robert Walker, an inventor and programmer, adds, “[A computer] doesn’t understand anything. All it can do is follow instructions. … Our programs so far are good at many things, and far better [than] us at quite a few things, but I think fair to say, that they don’t really ‘understand’ anything in the way that humans understand them.”[5] In other words, Walker believes the human who creates and runs the program needs to be ethical, not the machine. That’s a problem, because we know that not all humans are ethical — even smart ones. So what can be done?

 

My first suggestion is to ensure an ontology complements machine learning. In the case of the mislabeled photos mentioned above, an ontology would have known that a person is not a gorilla and a concentration camp is not a playground. In other words, an ontology can help ensure some stupid mistakes are not made. It’s not ethics; it’s common sense. The second thing that can be done is to ensure data sets are up to the task. Gray notes that an algorithm can only work with the data it’s got. In the case of the beauty contest, the AI system “had more white faces to look at than any other and based its results on that.”
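To make the ontology point concrete, here is a minimal sketch of the kind of sanity check that could veto an implausible label before it is published. The class names, hierarchy, and disjointness rules below are invented for illustration; a production knowledge base would be far larger and curated by domain experts.

```python
# Minimal sketch of an ontology-style sanity check on machine-generated labels.
# The hierarchy and disjointness rules are illustrative assumptions only.

ONTOLOGY = {            # child -> parent ("is-a") relations
    "person": "human",
    "concentration camp": "memorial site",
    "jungle gym": "playground",
}

DISJOINT = {            # classes the ontology asserts can never overlap
    ("human", "gorilla"),
    ("memorial site", "playground"),
}

def root(concept: str) -> str:
    """Follow is-a links up to the most general class we know about."""
    while concept in ONTOLOGY:
        concept = ONTOLOGY[concept]
    return concept

def label_is_plausible(detected: str, proposed_label: str) -> bool:
    """Veto any label whose class is declared disjoint from what was detected."""
    pair = (root(detected), root(proposed_label))
    return pair not in DISJOINT and tuple(reversed(pair)) not in DISJOINT

print(label_is_plausible("person", "gorilla"))                 # False -> veto the tag
print(label_is_plausible("concentration camp", "jungle gym"))  # False -> veto the tag
print(label_is_plausible("person", "person"))                  # True
```

Checks like this do not make a system ethical, but they catch the category errors a statistical model has no way to recognize on its own. Beyond such guardrails, Goldhill offers some further suggestions. She writes: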

“Broadly speaking, there are two main approaches to creating an ethical robot. The first is to decide on a specific ethical law (maximize happiness, for example), write a code for such a law, and create a robot that strictly follows the code. But the difficulty here is deciding on the appropriate ethical rule. Every moral law, even the seemingly simple one above, has a myriad of exceptions and counter examples. For example, should a robot maximize happiness by harvesting the organs from one man to save five? ‘The issues of morality in general are very vague,’ says Ronald Arkin, professor and director of the mobile robot laboratory at Georgia Institute of Technology. ‘We still argue as human beings about the correct moral framework we should use, whether it’s a consequentialist utilitarian means-justify-the-ends approach, or a Kantian deontological rights-based approach.’ And this isn’t simply a matter of arguing until we figure out the right answer. Patrick Lin, director of Ethics + Emerging Sciences Group at California Polytechnic State University, says ethics may not be internally consistent, which would make it impossible to reduce to programs. ‘The whole system may crash when it encounters paradoxes or unresolvable conflicts,’ he says. The second option is to create a machine-learning robot and teach it how to respond to various situations so as to arrive at an ethical outcome. This is similar to how humans learn morality, though it raises the question of whether humans are, in fact, the best moral teachers.”
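Lin’s warning about paradoxes is easy to see in miniature. The toy sketch below follows the first approach Goldhill describes, a fixed rule strictly applied; the rules and the dilemma are invented for illustration, and the point is simply that a strict rule-follower has no answer when every available action violates a rule.

```python
# Toy sketch of the first approach Goldhill describes: a fixed ethical rule,
# strictly applied. The rules and the dilemma below are invented for illustration.

class UnresolvableConflict(Exception):
    """Raised when every available action violates a hard rule."""

HARD_RULES = [
    lambda outcome: outcome["humans_harmed"] == 0,       # never harm a human
    lambda outcome: outcome["property_destroyed"] == 0,  # never destroy property
]

def choose_action(options: dict) -> str:
    """Return the first action whose predicted outcome satisfies every hard rule."""
    for action, outcome in options.items():
        if all(rule(outcome) for rule in HARD_RULES):
            return action
    raise UnresolvableConflict("No available action satisfies every rule.")

# A trolley-style dilemma: every option harms someone, so the rule-follower fails.
dilemma = {
    "swerve_into_wall": {"humans_harmed": 1, "property_destroyed": 1},
    "continue_ahead":   {"humans_harmed": 5, "property_destroyed": 0},
}

try:
    print(choose_action(dilemma))
except UnresolvableConflict as err:
    print("System cannot decide:", err)
```

This is what Lin means by the system “crashing”: the failure is not a bug in the code but a gap in the rule set, and adding more rules only multiplies the chances that two of them collide.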

The problem with the second approach (i.e., looking for ethical outcomes) is that some outcomes may not be clear cut. Loukides observes, “Ethics is about having an intelligent discussion, not about answers, as such — it’s about having the tools to think carefully about real-world actions and their effects, not about prescribing what to do in any situation. Discussion leads to values that inform decision-making and action.” What that tells me is that artificial intelligence development should always be a group effort, one that draws on manifold perspectives and intense discussion.

 

Summary

 

Artificial intelligence will be little better than humans when faced with paradoxes or unresolvable situations. That doesn’t mean we shouldn’t put a lot of effort into ensuring our smart systems act as ethically as possible. Fortunately, a number of academic institutions and research teams are focusing on the challenge of ethics and artificial intelligence. The international law firm K&L Gates LLP recently donated $10 million to Carnegie Mellon University to study the ethical and policy issues surrounding artificial intelligence and other computing technologies.[6] A research team at Georgia Tech is trying to make machines comply with international humanitarian law.[7] And I’m sure there are other ongoing efforts of which I’m unaware. Gray notes, “While researchers continue to look at ways to make artificial intelligence as safe as it can be, they are also working on a kill switch, so that [in] the worst-case scenario, a human can take over.” The bottom line is: Ethics can be programmed into artificial intelligence systems, but some situations will always require choosing the least injurious of several bad options.
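Gray’s mention of a kill switch is easy to picture as a thin supervisory layer wrapped around an autonomous agent. The sketch below is an assumption about what such a layer might look like, not a description of any particular research group’s design; the risk threshold and the escalation path are invented for illustration.

```python
# Minimal sketch of a human-override ("kill switch") layer around an agent.
# The risk threshold and escalation behaviour are illustrative assumptions.

import threading

class SupervisedAgent:
    def __init__(self, risk_threshold: float = 0.5):
        self.risk_threshold = risk_threshold
        self.halted = threading.Event()  # the switch a human can flip at any time

    def kill(self) -> None:
        """Human operator takes over: stop all autonomous action immediately."""
        self.halted.set()

    def act(self, action: str, estimated_risk: float) -> str:
        if self.halted.is_set():
            return f"halted: '{action}' not executed (a human has taken over)"
        if estimated_risk > self.risk_threshold:
            return f"deferred: '{action}' escalated to a human operator"
        return f"executed: '{action}'"

agent = SupervisedAgent()
print(agent.act("tidy the room", estimated_risk=0.1))                # executed
print(agent.act("discard unidentified object", estimated_risk=0.9))  # deferred
agent.kill()
print(agent.act("tidy the room", estimated_risk=0.1))                # halted
```

Keeping the override outside the agent’s own decision loop is the design point: the human’s authority should not depend on the agent agreeing to be stopped.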

 

Footnotes
[1] Alex Gray, “Can we trust robots to make ethical decisions?” World Economic Forum, 11 November 2016.
[2] Olivia Goldhill, “Can we trust robots to make moral decisions?” Quartz, 3 April 2016.
[3] Mike Loukides, “The ethics of artificial intelligence,” O’Reilly Media, 14 November 2016.
[4] Nick Bostrom and Eliezer Yudkowsky, “The Ethics of Artificial Intelligence,” Machine Intelligence Research Institute.
[5] Robert Walker, “Why Computer Programs Can’t Understand Truth – And Ethics Of Artificial Intelligence Babies,” Science 2.0, 9 November 2015.
[6] Press Release, “Carnegie Mellon Receives $10 Million from K&L Gates to Study Ethical Issues Posed by Artificial Intelligence,” Carnegie Mellon University, 2 November 2016.
[7] Goldhill, op. cit.
