
Ensuring AI Doesn’t Become a Clever Devil

Stephen DeAngelis

February 24, 2021

The late C. S. Lewis once stated, “Education without values, as useful as it is, seems rather to make man a more clever devil.” In today’s world, where we are educating computers as well as students, many people are concerned that computers aren’t being taught values. They worry artificial intelligence (AI) systems will turn out to be exactly the kind of clever devils Lewis warned about. Tech writer Thomas Macaulay (@thomas_macaulay) believes many current AI ethics efforts are a sham. He asserts, “Amid a growing backlash over AI’s racial and gender biases, numerous tech giants are launching their own ethics initiatives — of dubious intent. The schemes are billed as altruistic efforts to make tech serve humanity. But critics argue their main concern is evading regulation and scrutiny through ‘ethics washing.’”[1] He rhetorically asks, “At least we can rely on universities to teach the next generation of computer scientists to make [AI ethical]. Right?” His answer: “Apparently not, according to a new survey of 2,360 data science students, academics, and professionals by software firm Anaconda. Only 15% of instructors and professors said they’re teaching AI ethics, and just 18% of students indicated they’re learning about the subject.” Since AI ethics begins with the people who develop and deploy algorithms, critics see this lack of training as a critical shortcoming.

The need for AI ethics

Several years ago, at the First Annual Enterra Solutions® Cognitive Computing Summit, I asked the Venerable Tenzin Priyadarshi, Director of the Ethics Initiative at The Dalai Lama Center for Ethics and Transformative Values at MIT, to address the topic of ethics as it relates to artificial intelligence. Some skeptics in the audience openly questioned the need for ethics in the field of AI. They argued AI is just a technology, like a toaster, and no one believes toasters need to be ethical. Priyadarshi did an excellent job responding. He stated the Dalai Lama Center views ethics as “an optimization of human behavior” rather than a constraint. He pointed to the automotive sector, and its pursuit of driverless vehicles, as a good example of why ethics are important. Priyadarshi asked, “What should the AI governing an autonomous car do if faced with either running over three pregnant women and their three toddlers or self-destructing, killing the car’s owner? The ethical answer is to self-destruct; but, members of the public are unlikely to purchase a car with such programming, knowing that it could make them victims. To avoid forcing members of the car-buying public to choose, manufacturers are selling cars that are assistive, not autonomous.” Of course, automakers are still pursuing autonomous vehicles; and that’s the point. AI systems make decisions and provide insights upon which decisions are based — and decisions have consequences.

Sources of bias

There are two primary sources of bias with ethical implications. The first is data bias. AI bias often originates in the data from which a system learns. Pierre DeBois (@ZimanaAnalytics), founder of Zimana Analytics, explains, “Central to any AI discussion is data. Data is essential, of course, for AI-powered automation — but in and of itself, data has ambiguity built in.”[2] DeBois points out that such bias can have consequences in areas like marketing. He explains, “Because of [data] ambiguity, ethical concerns can arise from its usage when AI is applied to marketing decisions. The consequences may not be apparent until the model is operational. The public may not fully understand the AI mechanisms behind the media they encounter. But the real-world outcome from the AI becomes what customers use to evaluate the experience — and judge the consequences.”
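To make the point concrete, consider a minimal sketch (in Python, using scikit-learn) of how skewed training data surfaces as skewed model behavior. The groups, features, and noise levels below are invented for illustration; this is a toy demonstration, not a claim about any production system.

```python
# Toy illustration of data bias (all data invented): a model trained where
# one group is under-represented and noisier is less reliable for that group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, label_noise):
    """Generate n samples; higher label_noise means messier ground truth."""
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] + label_noise * rng.normal(size=n) > 0).astype(int)
    return X, y

# Group A dominates the training set; group B is scarce and noisy.
Xa, ya = make_group(5000, label_noise=0.2)
Xb, yb = make_group(200, label_noise=1.0)

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

for name, X, y in [("group A", Xa, ya), ("group B", Xb, yb)]:
    print(f"{name} accuracy: {model.score(X, y):.3f}")
# The same model is measurably less accurate for the scarce, noisy group --
# the disparity is inherited from the data, not written into the algorithm.
```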


The second potential source of bias is algorithms, the instructions that tell a computer how to deal with data. R.J. Talyor (@rjtalyor), founder and CEO of Pattern89, explains, “AI has the power to analyze billions of data points in the blink of an eye and translate them into actionable insights. … As AI becomes more common across multiple industries, ethical questions surrounding its creation, transparency and bias become more pressing.”[3] With so many opportunities for bias to arise, journalist David Hardoon rhetorically asks, “Can Artificial Intelligence be moral?”[4] His quick response: “In my opinion, no.” He then asks, “Should this prevent us from establishing how to morally use AI?” His answer: “Absolutely not. In fact, the absence of AI moral capability should drive our need for explicit and clear frameworks for the moral use of AI outputs. I use the term ‘moral’, somewhat sensationally, to emphasize the use of AI as a tool of judgment (decision making or decision support) where outcomes need to adhere to principles of ‘right’ and ‘wrong’. However, in reality, such polarity is not always practicable and the terms ‘ethical’ and ‘fair’ are more familiar and more commonly used.”
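One way practitioners make Hardoon’s “fair” concrete is to test a system’s outputs against simple group-level criteria. Below is a minimal, hypothetical sketch (plain Python; the decisions are invented) of one such check: the selection-rate comparison behind the widely cited “four-fifths rule.”

```python
# Hypothetical audit of a decision rule: do two groups receive favorable
# outcomes (e.g., approvals) at comparable rates? All data is invented.
decisions = [  # (group, approved) pairs
    ("A", True), ("A", True), ("A", False), ("A", True), ("A", True),
    ("B", False), ("B", True), ("B", False), ("B", False), ("B", False),
]

def selection_rate(group):
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = selection_rate("A"), selection_rate("B")
print(f"group A: {rate_a:.0%}, group B: {rate_b:.0%}, ratio: {rate_b / rate_a:.2f}")
# A ratio under 0.80 is a common "disparate impact" warning sign; here it is 0.25.
```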

Making AI ethical

In 2019, the European High-Level Expert Group on AI presented its Ethics Guidelines for Trustworthy Artificial Intelligence. “According to the Guidelines, trustworthy AI should be: (1) lawful — respecting all applicable laws and regulations; (2) ethical — respecting ethical principles and values; [and] (3) robust — both from a technical perspective while taking into account its social environment.”[5] The Guidelines note, “Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups, to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.” To help achieve this aim, the Guidelines insist humans must be involved in all aspects of AI development and implementation. “AI systems should empower human beings,” they state, “allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.”
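To illustrate what a human-in-the-loop mechanism can look like in practice, here is a minimal sketch; the confidence threshold, names, and escalation path are assumptions for illustration, not prescriptions from the Guidelines.

```python
# Sketch of a human-in-the-loop gate (names and threshold are hypothetical):
# the system acts on confident predictions and escalates the rest to a person.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float

CONFIDENCE_FLOOR = 0.90  # below this, a human makes the call

def resolve(case_id: str, pred: Prediction) -> str:
    if pred.confidence >= CONFIDENCE_FLOOR:
        return f"{case_id}: auto-decided '{pred.label}'"
    # Escalation path: queue the case for human review instead of acting.
    return f"{case_id}: escalated to human review (confidence={pred.confidence:.2f})"

print(resolve("case-001", Prediction("approve", 0.97)))
print(resolve("case-002", Prediction("deny", 0.61)))
```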


Of course, not all the decisions AI systems make or facilitate have moral or ethical dimensions. When ethical considerations do arise, some companies may be ill-equipped to deal with them. Fortunately, reports Karen Hao (@_KarenHao), a journalist specializing in AI, new companies have emerged to help make AI systems more ethical.[6] She writes, “A growing ecosystem of ‘responsible AI’ ventures promise to help organizations monitor and fix their AI models.” One such company, Parity, founded by Rumman Chowdhury, Accenture’s former Responsible AI lead, helps clients “identify how they want to audit their model — is it for bias or for legal compliance? — and then provides recommendations for tackling the issue.” Hao adds, “Parity is among a growing crop of startups promising organizations ways to develop, monitor, and fix their AI models. They offer a range of products and services from bias-mitigation tools to explainability platforms.” If your company is uncertain whether its AI systems raise ethical concerns, asking an outside expert may be a good idea.
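What might the audits Hao describes actually check? A common first step is comparing error rates across groups. The sketch below (invented data; the real auditing platforms she describes are far more extensive) compares false-positive rates, the kind of disparity an auditor would flag.

```python
# Rough sketch of one bias-audit check (data invented): compare each group's
# false-positive rate -- how often the model wrongly flags a true negative.
from collections import defaultdict

records = [  # (group, actual, predicted) triples from a hypothetical audit log
    ("A", 0, 0), ("A", 0, 1), ("A", 1, 1), ("A", 0, 0), ("A", 1, 1),
    ("B", 0, 1), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0), ("B", 0, 1),
]

false_pos = defaultdict(int)  # false positives per group
negatives = defaultdict(int)  # actual negatives per group
for group, actual, predicted in records:
    if actual == 0:
        negatives[group] += 1
        false_pos[group] += predicted

for group in sorted(negatives):
    rate = false_pos[group] / negatives[group]
    print(f"group {group}: false-positive rate {rate:.0%}")
# A wide gap between groups (here 33% vs. 75%) is exactly what an audit surfaces.
```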

Concluding thoughts

Companies often worry about how the discovery of bias in their AI systems will affect their reputations. Ethicists worry about how biased AI systems will affect society. By ensuring concerns about bias are addressed from the beginning of AI projects, companies can help ensure their systems don’t become clever devils in the end.

 

Footnotes
[1] Thomas Macaulay, “Study: Only 18% of data science students are learning about AI ethics,” The Next Web, 3 July 2020.
[2] Pierre DeBois, “How to Make Sure Your AI is Ethical: Analytics Corner,” DMN, 21 August 2019.
[3] R.J. Talyor, “Implementing Ethical Artificial Intelligence,” Pipeline, 8 December 2019.
[4] David R. Hardoon, “Can Artificial Intelligence be moral?” The Business Times, 6 January 2021.
[5] European High-Level Expert Group on AI, “Ethics guidelines for trustworthy AI,” European Commission, 8 April 2019.
[6] Karen Hao, “Worried about your firm’s AI ethics? These startups are here to help,” MIT Technology Review, 15 January 2021.
