
Artificial Intelligence and Moral Dilemmas

June 29, 2012


Don Brandes writes, “It is estimated that by 2020 a $1,000 dollar computer will have the processing power to match the human brain. By 2030 the average personal computer will have the processing power of a thousand human brains.” [“Moral Dilemmas of Artificial Intelligence”] This potential processing power raises the question of whether computers will ever achieve sentience. Richard Barry states, “It is an enormous question that touches religion, politics and law, but little consideration is given to [the] dawn of a new intelligent species and to the rights an autonomous sentient being could [be entitled to]. For a start, it would have to convince us that it was truly sentient: intelligent and able to feel (although it is debateable whether its feelings would mirror our own).” [“Sentience: The next moral dilemma,” ZDNet UK, 24 January 2001]

Not everyone believes that computers will become sentient. In an earlier post about the history of artificial intelligence, I cited a bit.tech article which noted that Professor Noel Sharkey believes “the greatest danger posed by AI is its lack of sentience rather than the presence of it. As warfare, policing and healthcare become increasingly automated and computer-powered, their lack of emotion and empathy could create significant problems.” [“The story of artificial intelligence,” 19 March 2012] The point is that, whether or not computers achieve sentience, moral dilemmas will arise over how we apply artificial intelligence in the years ahead.

A recent article in The Economist asserts, “As robots grow more autonomous, society needs to develop rules to manage them.” [“Morals and the machine,” 2 June 2012] How to give “thinking machines” a moral grounding has been a concern since the earliest days of artificial intelligence. The article begins with one of fiction’s best-known AI computers, HAL:

“In the classic science-fiction film ‘2001’, the ship’s computer, HAL, faces a dilemma. His instructions require him both to fulfil the ship’s mission (investigating an artefact near Jupiter) and to keep the mission’s true purpose secret from the ship’s crew. To resolve the contradiction, he tries to kill the crew. As robots become more autonomous, the notion of computer-controlled machines facing ethical decisions is moving out of the realm of science fiction and into the real world. Society needs to find ways to ensure that they are better equipped to make moral judgments than HAL was.”

Armed drones currently ply the skies over Afghanistan and have been used to attack Taliban and Al Qaeda leaders. Although those drones are not completely autonomous, they certainly could be. As the article notes, “Military technology, unsurprisingly, is at the forefront of the march towards self-determining machines.” It continues:

“Its evolution is producing an extraordinary variety of species. The Sand Flea can leap through a window or onto a roof, filming all the while. It then rolls along on wheels until it needs to jump again. RiSE, a six-legged robo-cockroach, can climb walls. LS3, a dog-like robot, trots behind a human over rough terrain, carrying up to 180kg of supplies. SUGV, a briefcase-sized robot, can identify a man in a crowd and follow him. There is a flying surveillance drone the weight of a wedding ring, and one that carries 2.7 tonnes of bombs.”

The bit.tech article cited earlier stated:

“If the idea of software-powered killing machines isn’t nightmarish enough, then some of science’s darker predictions for the future of AI certainly are. As far back as the 1960s, when AI research was still in its earliest stages, scientist Irving Good posited the idea that, if a sufficiently advanced form of artificial intelligence were created, it could continue to improve itself in what he termed an ‘intelligence explosion’. While Good’s supposition that an ‘ultraintelligent’ machine would be invented in the 20th century was wide of the mark, his theory exposed an exciting and potentially worrying possibility: that a superior artificial intellect could render human intelligence obsolete.”

Even if AI computers don’t become sentient or “render human intelligence obsolete,” they will eventually be required to make decisions that have a moral or ethical component — and not just those used in military systems. The article in The Economist continues:

“Robots are spreading in the civilian world, too, from the flight deck to the operating theatre. Passenger aircraft have long been able to land themselves. Driverless trains are commonplace. Volvo’s new V40 hatchback essentially drives itself in heavy traffic. It can brake when it senses an imminent collision, as can Ford’s B-Max minivan. Fully self-driving vehicles are being tested around the world. Google’s driverless cars have clocked up more than 250,000 miles in America, and Nevada has become the first state to regulate such trials on public roads. In Barcelona, … Volvo demonstrated a platoon of autonomous cars on a motorway. As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming—or at least appearing to assume—moral agency.”

The Economist also notes, “weapons systems currently have human operators ‘in the loop’, but … it will be possible to shift to … machines carrying out orders autonomously.” It continues:

“As that happens, they will be presented with ethical dilemmas. Should a drone fire on a house where a target is known to be hiding, which may also be sheltering civilians? Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants? Should a robot involved in disaster recovery tell people the truth about what is happening if that risks causing a panic? Such questions have led to the emergence of the field of ‘machine ethics’, which aims to give machines the ability to make such choices appropriately—in other words, to tell right from wrong.”

The article notes that “one way of dealing with these difficult questions is to avoid them altogether, by banning autonomous battlefield robots and requiring cars to have the full attention of a human driver at all times.” But the genie is already out of the bottle, and research in artificial intelligence continues to make breakthroughs. Autonomous systems are inevitable. The authors of the article in The Economist believe that “autonomous robots could do much more good than harm.” The article explains why:

“Robot soldiers would not commit rape, burn down a village in anger or become erratic decision-makers amid the stress of combat. Driverless cars are very likely to be safer than ordinary vehicles, as autopilots have made planes safer. Sebastian Thrun, a pioneer in the field, reckons driverless cars could save 1m lives a year. Instead, society needs to develop ways of dealing with the ethics of robotics—and get going fast. In America states have been scrambling to pass laws covering driverless cars, which have been operating in a legal grey area as the technology runs ahead of legislation. It is clear that rules of the road are required in this difficult area, and not just for robots with wheels.”

The article notes that “the best-known set of guidelines for robo-ethics are the ‘three laws of robotics’ coined by Isaac Asimov, a science-fiction writer, in 1942.” Those laws were aimed at ensuring that robots would never harm humans. “Unfortunately,” the article states, “the laws are of little use in the real world. … Regulating the development and use of autonomous robots will require a rather more elaborate framework. Progress is needed in three areas in particular.” The first area is the legal arena. The article explains:

“First, laws are needed to determine whether the designer, the programmer, the manufacturer or the operator is at fault if an autonomous drone strike goes wrong or a driverless car has an accident. In order to allocate responsibility, autonomous systems must keep detailed logs so that they can explain the reasoning behind their decisions when necessary. This has implications for system design: it may, for instance, rule out the use of artificial neural networks, decision-making systems that learn from example rather than obeying predefined rules.”
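To make the logging requirement concrete, here is a minimal sketch in Python of how a rule-based controller might record the reasoning behind each decision. Everything in it (the rule table, the decide_and_log function, the sample sensor reading) is hypothetical and only illustrates the kind of audit trail the article describes, not any real autonomous-vehicle software.

```python
import json
import time

# Hypothetical rule table: each entry pairs a named condition with an action.
# A real controller would be far richer; this only illustrates the audit trail.
RULES = [
    ("obstacle_within_5m", lambda s: s["obstacle_distance_m"] < 5.0, "emergency_brake"),
    ("obstacle_within_20m", lambda s: s["obstacle_distance_m"] < 20.0, "slow_down"),
    ("clear_road", lambda s: True, "maintain_speed"),
]

def decide_and_log(sensors, log_path="decision_log.jsonl"):
    """Pick the first matching rule and append a timestamped record
    explaining which inputs and which rule produced the action."""
    for name, condition, action in RULES:
        if condition(sensors):
            record = {
                "timestamp": time.time(),
                "inputs": sensors,
                "rule_fired": name,
                "action": action,
            }
            with open(log_path, "a") as log:
                log.write(json.dumps(record) + "\n")
            return action

if __name__ == "__main__":
    print(decide_and_log({"obstacle_distance_m": 3.2}))  # -> emergency_brake
```

A rule table like this can be audited after the fact. A neural network trained from examples offers no equally direct account of why it acted, which is why the article says the logging requirement has implications for system design.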

Since the most promising systems are “learning systems,” I suspect that a ban on such systems is unlikely. The second area that needs more attention, the article says, is determining what is ethical. It explains:

“Second, where ethical systems are embedded into robots, the judgments they make need to be ones that seem right to most people. The techniques of experimental philosophy, which studies how people respond to ethical dilemmas, should be able to help.”
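As a rough illustration of how experimental-philosophy survey data might feed into such a system, the sketch below tallies hypothetical responses to a pair of dilemmas and keeps only the choices a clear majority of respondents endorse. The dilemma names, the responses, and the 70 percent consensus threshold are all invented for illustration.

```python
from collections import Counter

# Hypothetical survey data: for each dilemma, what respondents said the
# machine should do. Real experimental-philosophy studies are far more nuanced.
SURVEY = {
    "swerve_to_avoid_pedestrian": ["swerve", "swerve", "swerve", "brake_only", "swerve"],
    "reveal_bad_news_in_disaster": ["tell_truth", "withhold", "tell_truth", "tell_truth", "withhold"],
}

def majority_judgments(survey, threshold=0.7):
    """Return the consensus choice per dilemma, or None when
    respondents are too divided to embed a single rule."""
    consensus = {}
    for dilemma, answers in survey.items():
        choice, count = Counter(answers).most_common(1)[0]
        consensus[dilemma] = choice if count / len(answers) >= threshold else None
    return consensus

print(majority_judgments(SURVEY))
# {'swerve_to_avoid_pedestrian': 'swerve', 'reveal_bad_news_in_disaster': None}
```

The second dilemma shows the hard case: where respondents are split, there is no judgment that “seems right to most people” for a designer to embed.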

The last area that the article says requires attention is cross-discipline collaboration. It explains:

“Last, and most important, more collaboration is required between engineers, ethicists, lawyers and policymakers, all of whom would draw up very different types of rules if they were left to their own devices. Both ethicists and engineers stand to benefit from working together: ethicists may gain a greater understanding of their field by trying to teach ethics to machines, and engineers need to reassure society that they are not taking any ethical short-cuts.”

Several years ago, Science Daily reported that researchers from Portugal and Indonesia were working on “an approach to decision making based on computational … which might one day give machines a sense of morality.” [“Moral Machines? New Approach To Decision Making Based On Computational Logic,” 25 August 2009] The article continues:

“They have turned to a system known as prospective logic to help them begin the process of programming morality into a computer. Put simply, prospective logic can model a moral dilemma and then determine the logical outcomes of the possible decisions. The approach could herald the emergence of machine ethics. … The team has developed their program to help solve the so-called ‘trolley problem’. This is an ethical thought experiment first introduced by British philosopher Philippa Foot in the 1960s. The problem involves a trolley running out of control down a track. Five people are tied to the track in its path. Fortunately, you can flip a switch, which will send the trolley down a different track to safety. But, there is a single person tied to that track. Should you flip the switch? The prospective logic program can consider each possible outcome based on different versions of the trolley problem and demonstrate logically, what the consequences of the decisions made in each might be. The next step would be to endow each outcome with a moral weight, so that the prototype might be further developed to make the best judgement as to whether to flip the switch.”
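Science Daily does not publish the researchers’ prospective logic program, but the basic idea of enumerating outcomes and then attaching moral weights can be sketched in a few lines of Python. The outcome model and the weights below are assumptions made purely for illustration; they are not the researchers’ system.

```python
# A toy model of the trolley problem: each action leads to a set of
# consequences, and each consequence carries an assumed moral weight.
ACTIONS = {
    "do_nothing": {"deaths": 5, "intervened": False},
    "flip_switch": {"deaths": 1, "intervened": True},
}

# Hypothetical weights: every death counts heavily against an action,
# and actively intervening carries a small additional cost.
WEIGHTS = {"per_death": -10.0, "intervention": -1.0}

def moral_score(consequences, weights):
    """Combine the consequences of an action into a single score."""
    score = consequences["deaths"] * weights["per_death"]
    if consequences["intervened"]:
        score += weights["intervention"]
    return score

def best_action(actions, weights):
    """Enumerate every action and return the least-bad one."""
    return max(actions, key=lambda a: moral_score(actions[a], weights))

for action, consequences in ACTIONS.items():
    print(action, moral_score(consequences, WEIGHTS))
print("chosen:", best_action(ACTIONS, WEIGHTS))  # -> flip_switch
```

Even in this toy version the difficulty is plain: the enumeration is mechanical, but the weights themselves encode contested moral judgments, which is exactly the step the researchers describe as the next one.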

As the trolley problem demonstrates, some situations don’t have a happy ending even when an ethical decision is made. That’s why they are called moral dilemmas. The article in The Economist concludes, “Technology has driven mankind’s progress, but each new advance has posed troubling new questions. Autonomous machines are no different. The sooner the questions of moral agency they raise are answered, the easier it will be for mankind to enjoy the benefits that they will undoubtedly bring.” Fortunately, many of the AI applications used in business don’t face the kind of moral dilemmas being discussed here. Nevertheless, no discussion of AI would be complete if ethics and morals were not included.
