
Why Humans (Not Artificial Intelligence Systems) Need to be Ethical

August 4, 2015

Anyone following recent discussions about artificial intelligence (AI) knows that there has been a lot of chatter over the past year about whether humans are going to fall prey to artificial intelligence overlords. People as famous as Professor Stephen Hawking and Tesla’s Elon Musk have weighed in on the subject. An article from Taylor & Francis poses the question: Is it time we started thinking about programming ethics into our artificial intelligence systems?[1] The article states:

“From performing surgery and flying planes to babysitting kids and driving cars, today’s robots can do it all. With chatbots such as Eugene Goostman recently being hailed as ‘passing’ the Turing test, it appears robots are becoming increasingly adept at posing as humans. While machines are becoming ever more integrated into human lives, the need to imbue them with a sense of morality becomes increasingly urgent. But can we really teach robots how to be good?”

The article’s authors wonder if chatbots (i.e., computer programs pretending to be humans) aren’t themselves a form of evil since they are purposely trying to deceive people into thinking they are interacting with another person. Robert Walker, an inventor and programmer, thinks that such talk is gibberish. He argues that computers will never be able to duplicate the human brain and will, therefore, never be human enough to know truth or act ethically in the same way as humans.[2] Walker references the work of a number of well-known scientists, including Professor Roger Penrose, a quantum physicist at Oxford University. Walker writes:

“If [Penrose] is right, then it is never going to be possible to create ‘digital humans’ as computer programs. But it’s much more general than that. If he is right, it is not just impossible to completely simulate a human down to every atom in a computer — you can’t even have a computer program that can truly understand addition and multiplication in the way a human can. … The computer program that beats you at chess has no idea what chess is, or what a chess piece is. It couldn’t discuss the match with you, or recognize a chess game in a photograph. It doesn’t understand anything. All it can do is follow instructions. … Our programs so far are good at many things, and far better at us at quite a few things, but I think fair to say, that they don’t really ‘understand’ anything in the way that humans understand them. If Roger Penrose is right, then no programmable computer can ever understand mathematical truth. If so — perhaps arguably they can never really understand anything at all, just follow the instructions programmed into them. … Seems unlikely to me, that, e.g., a program designed to drive cars, or play chess or whatever, would somehow suddenly ‘wake up’ and understand the world like a human being. Or even one designed as a generalist, say as a companion for humans or a personal robot ‘butler’ as in some of Asimov’s stories, or the like. Because of Penrose’s and Gödel’s arguments. They will be programmed machines, and will have whatever capabilities we build into them — and if the designer has any sense, then as the robots get more powerful then you build in safeguards also so that it is easy to stop them instantly if they cause any problems through some problem with the way they interact with the world.”

In other words, Walker believes that it is the human who creates and runs the artificial intelligence program who needs to be ethical, not the machine. Not everyone, however, buys into Walker’s arguments. Susan Fourtané (@SusanFourtane) reports that some scientists believe that “autonomous, morally competent robots” can be built and that they would be “better moral creatures than we are.”[3] She writes:

“In 2002, roboticist Gianmarco Veruggio coined the term roboethics — the human ethics of robots’ designers, manufacturers, and users — to outline where research should be focused. At that time, the ethics of artificial intelligence was divided into two subfields.

  • Machine ethics: This branch deals with the behavior of artificial moral agents.
  • Roboethics: This branch responds to questions about the behavior of humans — how they design, construct, use, and treat robots and other artificially intelligent beings. Roboethics ponders the possibility of programming robots with a code of ethics that could respond appropriately according to social norms that differentiate between right and wrong.

Naturally, to be able to create such morally autonomous robots, researchers have to agree on some fundamental pillars: what moral competence is and what humans would expect from robots working side by side with them, sharing decision making in areas like healthcare and warfare. At the same time, another question arises: What is the human responsibility of creating artificial intelligence with moral autonomy? And the leading research question: What would we expect of morally competent robots?”

The fact that Fourtané writes about having to program robots with a code of ethics supports Walker’s arguments that it is the human programmer, rather than the machine, who must be ethical in the first place. She continues:

“Professors Bertram F. Malle of Brown University and Matthias Scheutz of Tufts University published a research paper … titled ‘Moral Competence in Social Robots.’ They argue that moral competence consists of four broad components.

1. Moral core: ‘A system of norms and the language and concepts to communicate about these norms,’ including moral concepts and language and a network of moral norms
2. Moral cognition and affect: the emotional response to norm violations and moral judgment
3. Moral decision making and action: conforming one’s own actions to the norms
4. Moral communication: Explaining, justifying, negotiating, and reconciling norm violations

Designing autonomous, morally competent robots may be inspiring and fascinating, but it certainly will not be easy.”

Nothing in the arguments presented by Fourtané convinces me that machines are going to be able to develop an autonomous moral core beyond what is programmed into them. That’s not to say that they won’t be able to resolve conflicting rules. Even in today’s business environment, cognitive computing systems programmed with business rules that allow them to make autonomous decisions successfully resolve conflicting rules to ensure that effective and efficient actions are taken. This doesn’t mean that such systems understand the moral implications of the decisions they make or the actions they take.

In a follow-on article, Fourtané interviews Matthias Scheutz, PhD, professor of computer science at the Tufts School of Engineering and director of the university’s Human-Robot Interaction Laboratory (HRI Lab). Scheutz all but admits that it is humans, not machines, who must be ethical. He told Fourtané, “As robot designers, we are responsible for developing the control architecture with all its algorithms that allows for moral reasoning, but we do not make any commitments to the ethical principles with which the robot will be endowed. This is the job of those deploying the robot, i.e., to decide what ethical principles, rules, norms, etc. to include. As a result, the question of responsibility of robot behavior will be a critical one for legal experts to determine ahead of time, especially in the light of instructible robots that can also acquire new knowledge during task performance.”[4]

 

I think it is fair to say that, for the time being, ethics remains something with which humans, not machines, must deal. We can program our ethical rules into machines and can prioritize those rules to help machines decide between conflicting rules, but the machines that carry out that programming will not be making decisions based on some inherent moral core. That means people should be far more concerned about the ethics of the people creating the algorithms than about the machines that carry those instructions out. In business, we see both ethical and morally corrupt executives; unfortunately, we haven’t figured out how to ensure that only ethical leaders rise to the top. One of the benefits of having machines make routine decisions (if they are programmed correctly) is that the right choice will be made without an ethical dilemma entering the mix.
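To make the point about prioritized rules concrete, here is a minimal sketch, in Python, of how a rules engine might resolve a conflict between business rules by priority. The rule names, priorities, and order facts are hypothetical illustrations, not a description of any particular cognitive computing system.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, Optional

@dataclass
class Rule:
    name: str
    priority: int                               # higher number wins a conflict
    applies: Callable[[Dict[str, Any]], bool]   # does the rule fire for these facts?
    action: str                                 # the decision the rule recommends

# Hypothetical business rules; two of them can conflict for the same order.
RULES = [
    Rule("expedite_key_account", 2, lambda f: f["customer_tier"] == "key", "expedite"),
    Rule("hold_if_credit_exceeded", 3, lambda f: f["credit_exceeded"], "hold"),
    Rule("standard_fulfillment", 1, lambda f: True, "standard"),
]

def decide(facts: Dict[str, Any]) -> Optional[str]:
    """Return the action recommended by the highest-priority applicable rule."""
    applicable = [rule for rule in RULES if rule.applies(facts)]
    if not applicable:
        return None
    return max(applicable, key=lambda rule: rule.priority).action

# Both "expedite_key_account" and "hold_if_credit_exceeded" fire here;
# the human-assigned priorities, not the program, settle the conflict.
print(decide({"customer_tier": "key", "credit_exceeded": True}))  # -> hold
```

The sketch illustrates the broader argument: the “decision” is determined entirely by the rules and priorities a human designer supplies; the program has no opinion about whether holding or expediting the order is the right thing to do.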

 

Footnotes
[1] Taylor & Francis, “How to train your robot: Can we teach robots right from wrong?Nanowerk, 14 October 2015.
[2] Robert Walker, “Why Computer Programs Can’t Understand Truth – And Ethics Of Artificial Intelligence Babies,” Science 2.0, 9 November 2015.
[3] Susan Fourtané, “Ethical, Autonomous Robots of the Near Future,” EE Times, 14 July 2015.
[4] Susan Fourtané, “Engineering Ethics Into A Robot,” EE Times, 16 July 2015.
