
Artificial Intelligence: The Sad Tale of Tay

March 29, 2016


“Tay was born pure,” writes Anthony Lydgate (@anthonylydgate). “She loved E.D.M., in particular the work of Calvin Harris. She used words like ‘swagulated’ and almost never didn’t call it ‘the internets.’ She was obsessed with abbrevs and the prayer-hands emoji. She politely withdrew from conversations about Zionism, Black Lives Matter, Gamergate, and 9/11, and she gave out the number of the National Suicide Prevention Hotline to friends who sounded depressed. She never spoke of sexting, only of ‘consensual dirty texting.’ She thought that the wind sounded Scottish, and her favorite Pokémon was a sparrow. In short, Tay — the Twitter chat bot that Microsoft launched on [23 March 2016] — resembled her target cohort, the millennials, about as much as an artificial intelligence could, until she became a racist, sexist, trutherist, genocidal maniac. On [24 March], after barely a day of consciousness, she was put to sleep by her creators.”[1] Microsoft released the following apology for this unfortunate turn of events:

“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

In other words, the terrible things that Tay tweeted were a reflection of the worst of mankind, not the worst of artificial intelligence. Alexander Eule writes, “First Google’s self-driving car crashed, then Microsoft’s Twitter bot started spewing inappropriate tweets. That’s what happens when machines learn from the world.”[2] He continues, “In a Frankensteinian fit, the bot wrote 96,000 tweets in 16 hours, some denying the Holocaust, others spreading the mantras of white supremacists. Tech site Ars Technica headlined its story: ‘Microsoft terminates its Tay AI chatbot after she turns into a Nazi.’ There’s no word on when Tay will return.” So what are we to learn from Tay’s early demise? I’m sure there are lots of lessons, but one big lesson is that we need to program ethics into artificial intelligence (AI) systems. Last year an article from Taylor & Francis posed the question: Is it time we started thinking about programming ethics into our artificial intelligence systems?[3] The article also asks, “While machines are becoming ever more integrated into human lives, the need to imbue them with a sense of morality becomes increasingly urgent. But can we really teach robots how to be good?” That’s a question Microsoft is wrestling with as I write. The article notes there is ongoing research into chatbots posing as humans. The question under study is whether such chatbots (i.e., computer programs pretending to be humans) are themselves a form of evil, since they purposely try to deceive people into thinking they are interacting with another person. Tay, of course, was a known AI chatbot; there was no intent on Microsoft’s part to deceive anyone into thinking she was human. Even so, Microsoft was clearly caught off guard when Tay started tweeting vitriol. Peter Lee (@peteratmsr), Corporate Vice President of Microsoft Research, reports, “We’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”[4] Lee goes on to share what Microsoft learned and how it is taking these lessons forward. He writes:

“In the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay. Looking ahead, we face some difficult — and yet exciting — research challenges in AI design. AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes. To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.”
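Microsoft has not published Tay’s internals, so the sketch below is purely illustrative. The names (`NaiveChatBot`, `looks_abusive`, `BLOCKED_TERMS`) and the keyword blocklist are my own assumptions, not Microsoft’s design; the point is simply that a bot which learns directly from raw user input needs some gate between the conversation and whatever it commits to memory, or a coordinated group can dominate what it learns in a matter of hours.

```python
# Hypothetical sketch (not Tay's actual design): a chatbot that learns phrases
# verbatim from users, with an optional moderation gate in front of its memory.

from collections import Counter

# Assumed, simplified blocklist; a production system would rely on trained
# classifiers, rate limiting, and human review rather than keyword matching.
BLOCKED_TERMS = {"example_slur", "example_conspiracy_phrase"}

def looks_abusive(message: str) -> bool:
    """Crude check for obviously abusive content."""
    lowered = message.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

class NaiveChatBot:
    """Remembers user phrases and parrots back the most common one."""

    def __init__(self, moderate: bool = True):
        self.moderate = moderate
        self.memory = Counter()

    def ingest(self, message: str) -> None:
        # The lesson of Tay in one branch: without this check, whatever users
        # repeat most often becomes what the bot says.
        if self.moderate and looks_abusive(message):
            return
        self.memory[message] += 1

    def reply(self) -> str:
        if not self.memory:
            return "hellooooo world"
        return self.memory.most_common(1)[0][0]

bot = NaiveChatBot(moderate=True)
for msg in ["the internets are great", "example_conspiracy_phrase",
            "the internets are great"]:
    bot.ingest(msg)
print(bot.reply())  # -> "the internets are great"
```

Even in this toy version, the interesting failures are social rather than technical: the gate only catches whatever its human authors anticipated, which is exactly the oversight Lee describes.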

Robert Walker, an inventor and programmer, reminds us, “[A computer] doesn’t understand anything. All it can do is follow instructions. … Our programs so far are good at many things, and far better [than] us at quite a few things, but I think fair to say, that they don’t really ‘understand’ anything in the way that humans understand them.”[5] In other words, Walker believes it is the human who creates and runs the program, not the machine, that needs to be ethical. I agree with him. Susan Fourtané (@SusanFourtane) reports, however, that some scientists believe that “autonomous, morally competent robots” can be built and that they would be “better moral creatures than we are.”[6] She writes:

“Naturally, to be able to create such morally autonomous robots, researchers have to agree on some fundamental pillars: what moral competence is and what humans would expect from robots working side by side with them, sharing decision making in areas like healthcare and warfare. At the same time, another question arises: What is the human responsibility of creating artificial intelligence with moral autonomy? And the leading research question: What would we expect of morally competent robots?”

The fact that Fourtané writes about having to program robots with a code of ethics supports Walker’s argument that it is the human programmer, rather than the machine, who must be ethical in the first place. She continues:

“Professors Bertram F. Malle of Brown University and Matthias Scheutz of Tufts University published a research paper … titled ‘Moral Competence in Social Robots.’ They argue that moral competence consists of four broad components.

1. Moral core: ‘A system of norms and the language and concepts to communicate about these norms,’ including moral concepts, moral language, and a network of moral norms
2. Moral cognition and affect: the emotional response to norm violations and moral judgment
3. Moral decision making and action: conforming one’s own actions to the norms
4. Moral communication: explaining, justifying, negotiating, and reconciling norm violations

Designing autonomous, morally competent robots may be inspiring and fascinating, but it certainly will not be easy.”
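To make those four components a little more concrete, here is a rough sketch of how they might map onto a program. The class names, norms, and methods are my own illustrative assumptions; Malle and Scheutz do not propose any such implementation, and real moral competence is obviously far richer than a lookup over a norm list.

```python
# Illustrative sketch only: mapping Malle and Scheutz's four components of
# moral competence onto simple Python structures. Nothing here comes from
# their paper; it just shows where each component might live in code.

from dataclasses import dataclass, field

@dataclass
class Norm:
    name: str
    description: str
    forbidden_actions: set

@dataclass
class MoralCore:
    """Component 1: a system of norms plus the vocabulary to talk about them."""
    norms: list = field(default_factory=list)

    def violated_by(self, action: str):
        return [n for n in self.norms if action in n.forbidden_actions]

class MorallyCompetentAgent:
    def __init__(self, core: MoralCore):
        self.core = core  # the norms are supplied by humans, not discovered

    def judge(self, action: str):
        """Component 2 (moral cognition and affect): detect norm violations."""
        return self.core.violated_by(action)

    def decide(self, candidate_actions):
        """Component 3 (moral decision making and action): conform to the norms."""
        return [a for a in candidate_actions if not self.judge(a)]

    def explain(self, action: str) -> str:
        """Component 4 (moral communication): justify or flag a choice."""
        violations = self.judge(action)
        if not violations:
            return f"'{action}' conforms to all known norms."
        return f"'{action}' violates: " + ", ".join(n.name for n in violations)

core = MoralCore([Norm("no-harm", "Do not harm users", {"insult_user"})])
agent = MorallyCompetentAgent(core)
print(agent.decide(["greet_user", "insult_user"]))  # ['greet_user']
print(agent.explain("insult_user"))                 # 'insult_user' violates: no-harm
```

Notice that every norm in the sketch is authored by a human; the agent merely consults them.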

Nothing in the arguments presented by Fourtané convinces me that machines are going to be able to develop an autonomous moral core beyond what is programmed into them. In a follow-on article, Fourtané interviewed Professor Scheutz, who all but admits that it is humans, not machines, who must be ethical. Scheutz told Fourtané, “As robot designers, we are responsible for developing the control architecture with all its algorithms that allows for moral reasoning, but we do not make any commitments to the ethical principles with which the robot will be endowed. This is the job of those deploying the robot, i.e., to decide what ethical principles, rules, norms, etc. to include. As a result, the question of responsibility of robot behavior will be a critical one for legal experts to determine ahead of time, especially in the light of instructible robots that can also acquire new knowledge during task performance.”[7]

 

I think it is fair to say that, for the time being, ethics remains something with which humans, not machines, must deal. We can program our ethical rules into machines, and we can prioritize those rules to help machines decide between conflicting rules, but the machines that carry out that programming will not be making decisions based on some inherent moral core. That means people should be far more concerned about the ethics of the people creating the algorithms than about the machines that carry those instructions out. Tay showed us that the worst of humanity is just as capable of shaping a machine’s behavior as the best of humanity. It’s a cautionary tale.
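As a final illustration of that last point, here is a minimal sketch of what “prioritized ethical rules” might look like in code. The rule names, priorities, and actions are invented for the example; the takeaway is that both the rules and their ranking come from people, and the machine simply applies them.

```python
# Minimal sketch of human-authored, prioritized ethical rules. The rules,
# priorities, and action names are invented for illustration.

from dataclasses import dataclass
from typing import Callable

@dataclass
class EthicalRule:
    name: str
    priority: int                        # lower number = higher priority
    applies_to: Callable[[str], bool]    # does this rule have an opinion on the action?
    permits: bool                        # its verdict when it applies

RULES = [
    EthicalRule("never-harm-humans", 0, lambda a: "harm" in a, permits=False),
    EthicalRule("follow-operator-orders", 1, lambda a: a.startswith("ordered_"), permits=True),
]

def action_allowed(action: str, default: bool = True) -> bool:
    """Resolve conflicts by taking the verdict of the highest-priority applicable rule."""
    for rule in sorted(RULES, key=lambda r: r.priority):
        if rule.applies_to(action):
            return rule.permits
    return default

# "ordered_harm_visitor" matches both rules; the higher-priority prohibition wins.
print(action_allowed("ordered_harm_visitor"))  # False
print(action_allowed("ordered_fetch_coffee"))  # True
```

Nothing in that loop constitutes a moral core; change the rule list or the priorities and the machine’s “ethics” change with them.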

 

Footnotes
[1] Anthony Lydgate, “I’ve Seen the Greatest A.I. Minds of My Generation Destroyed by Twitter,” The New Yorker, 25 March 2016.
[2] Alexander Eule, “The Rise and Fall of Microsoft’s Robo Bigot,” Barron’s, 26 March 2016.
[3] Taylor & Francis, “How to train your robot: Can we teach robots right from wrong?” Nanowerk, 14 October 2015.
[4] Peter Lee, “Learning from Tay’s introduction,” Official Microsoft Blog, 25 March 2016.
[5] Robert Walker, “Why Computer Programs Can’t Understand Truth – And Ethics Of Artificial Intelligence Babies,” Science 2.0, 9 November 2015.
[6] Susan Fourtané, “Ethical, Autonomous Robots of the Near Future,” EE Times, 14 July 2015.
[7] Susan Fourtané, “Engineering Ethics Into A Robot,” EE Times, 16 July 2015.
