Why People Are Concerned About Ethical AI

June 12, 2020

Most people don’t think about ethics when they think about technology. When you place a piece of bread in your toaster, you don’t ask if you’re acting ethically. So why are people so concerned about ethics and artificial intelligence (AI)? The simple answer is that AI is so integrated into our lives that we need to know the information AI systems provide us is accurate and fair. Shohini Kundu, a PhD student at the University of Chicago, notes, “Today, digital information technology has redefined how people interact with each other socially, and even how some find their partners. Redefined relationships between consumers, producers and suppliers, industrialists and laborers, service providers and clients, friends and partners are already creating an upheaval in society that is altering the postindustrial account of moral reasoning.”[1]

She is concerned that AI systems are wresting decision-making from humans and, she insists, we need to know those decisions are being made ethically. She explains, “The digital revolution of the late 20th century brought us information at our fingertips, allowing us to make quick decisions, while the agency to make decisions, fundamentally, rested with us. AI is changing that by automating the decision-making process, promising better qualitative results and improved efficiency. … Unfortunately, in that decision-making process, AI also took away the transparency, explainability, predictability, teachability and auditability of the human move, replacing it with opacity.” She then asks, “If we don’t know how AIs make decisions, how can we trust what they decide?”[1] AI explainability is an important topic; however, in this article I want to address a related question: How can we ensure AI systems produce ethical results?

The importance of ethics

Mark van Rijmenam (@VanRijmenam), founder of Datafloq, writes, “AI is, ultimately, an advanced tool for computation and analysis. It’s susceptible to errors and bias when it’s developed with malicious intent or trained with adversarial data inputs. AI has enormous potential to be weaponized in ways which threaten public safety, security, and quality of life, which is why AI ethics is so important.”[2] Ethics are important; however, author and speaker Joe McKendrick (@joemckendrick) believes few companies are paying attention to them. He writes, “While artificial intelligence is the trend du jour across enterprises of all types, there’s still scant attention being paid to its ethical ramifications. Perhaps it’s time for people to step up and ask the hard questions. For enterprises, it’s time to bring together — or recruit — people who can ask the hard questions.”[3] Steven Tiell (@stiell), head of Responsible Innovation for Accenture Labs, agrees. He believes companies should establish ethics committees to ask those hard questions.

Tiell writes, “Contemporary sensitivities to bias are growing, and this will only increase with the proliferation and ubiquity of Artificial Intelligence. Most of today’s AI systems are built via machine learning, a technique that requires any one of thousands of potential algorithms to ‘learn’ patterns from extremely large stockpiles of data. This should produce a model that is predictive of future real-world scenarios, but bias skews the accuracy of these models. Organizations using AI are starting to recognize the role that strong, organization-wide values must play in fostering responsible innovation.”[4] He adds, “Every organization can create a strong internal governance framework to address how they design and implement AI. In collaboration with the Ethics Institute at Northeastern University, Accenture has released a report on Building Data & AI Ethics Committees, which can serve as a manual. It provides a roadmap and identifies key decisions that organizations will need to make: What does success look like? What are the values the committee is meant to promote and protect? What types of expertise are needed? What is the purview of the committee? What are the standards by which judgments are made?”
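
Tiell’s point about bias skewing models is easy to demonstrate. The sketch below is a hypothetical illustration (the data, feature names, and group labels are invented for this article, not drawn from any cited source): it trains a simple classifier on historically skewed data, then measures the gap in predicted approval rates across a protected group, one concrete check an ethics committee could require before deployment.

```python
# Hypothetical illustration: a model trained on historically biased data
# reproduces that bias. All data and names here are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, n)                  # protected attribute (0 or 1)
income = rng.normal(50 + 10 * group, 15, n)    # group 1 skews higher income
# Historical approvals favored group 1 even at equal income (biased labels).
approved = (income + 8 * group + rng.normal(0, 10, n)) > 55

model = LogisticRegression(max_iter=1_000).fit(np.c_[income, group], approved)
pred = model.predict(np.c_[income, group])

rate_0 = pred[group == 0].mean()               # predicted approval rate, group 0
rate_1 = pred[group == 1].mean()               # predicted approval rate, group 1
print(f"group 0 approval rate: {rate_0:.1%}")
print(f"group 1 approval rate: {rate_1:.1%}")
print(f"demographic parity difference: {abs(rate_1 - rate_0):.1%}")
```

The model faithfully learns the historical skew, so the parity gap reappears in its predictions. A governance framework of the kind Tiell describes would decide, in advance, how large such a gap can be before it blocks deployment.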

Fostering ethical AI

The former head of IBM, Ginni Rometty, asserted that the path to ethical AI involves three areas: purpose, transparency, and skills.[5] Mala Anand (@MalaAnand_), CVP for Customer Experience at Microsoft, agrees with Rometty that purpose is paramount in developing ethical AI. She writes, “Organizations can work to ensure the responsible building and application of AI by focusing on very specific business outcomes to guide their efforts. Designing purpose-built applications for well-defined business outcomes can act as a guardrail for responsible growth, can limit the likelihood of unintended consequences, and can surface negative implications early enough to mitigate them.”[6]

As Kundu noted at the beginning of this article, transparency is also important. She writes, “As businesses and societies turn rapidly towards AI, which may in fact make better decisions with a far longer time horizon than humans, humans with their shorter-range context will be baffled and frustrated, eroding the only currency for a functioning society, namely trust.” Trust is essential if we are going to let machines make decisions affecting our lives. Sue Feldman (@susanfeldman), President of Synthexis, writes, “The issue of trust — how much to trust the recommendations and predictions of ‘black box’ AI and cognitive computing systems is central to the issue of AI ethics because it raises the question of expectations.”[7] At Enterra Solutions®, we frequently leverage the Massive Dynamics Representational Learning Machine™ (RLM), which is based on research in High Dimensional Model Representation and functional analysis. Unlike many machine learning approaches, which are opaque “black boxes,” the RLM acts as a “glass box,” providing the user a functional understanding of the structure and dependencies within the data.
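
To make the “glass box” idea concrete, here is a minimal sketch. It does not reproduce Enterra’s proprietary RLM; it simply uses an ordinary linear model (with invented feature names) to show the kind of functional understanding a transparent model offers: each prediction decomposes exactly into per-feature contributions a user can inspect and audit.

```python
# Minimal "glass box" sketch (not Enterra's RLM): a linear model's prediction
# decomposes exactly into per-feature contributions, so a user can see which
# inputs drove a given decision. Feature names are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
features = ["price", "promotion", "seasonality"]
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 200)

model = LinearRegression().fit(X, y)

x_new = np.array([1.2, -0.4, 0.9])            # one new case to explain
contributions = model.coef_ * x_new           # each feature's share of the output
prediction = model.intercept_ + contributions.sum()

for name, share in zip(features, contributions):
    print(f"{name:12s} contributes {share:+.2f}")
print(f"prediction: {prediction:.2f}")
```

A deep network would return the same kind of prediction with no such decomposition. Glass-box approaches keep the model’s structure inspectable, which is what makes the resulting decisions auditable and, ultimately, trustworthy.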

The bottom line, however, is that people are more important for ensuring ethical AI than either purpose or transparency. If nefarious actors want to use AI unethically, they will. Feldman notes, “Questions of privacy, bullying, meddling with elections, and hacking of corporate and public systems abound.” Rometty asserted, “AI platforms must be built with people in the industry, be they doctors, teachers, or underwriters. And companies must prepare to train human workers on how to use these tools to their advantage.” She might well have added: and in an ethical manner. Anand concludes, “In our experience across industry after industry, the most responsible AI occurs when company leadership is fully engaged, when applications are defined by clear business outcomes that are central to the company mission, and when IT and leadership collaborate to confront business and ethical quandaries together.”

Footnotes
[1] Shohini Kundu, “Ethics in the Age of Artificial Intelligence,” Scientific American, 3 July 2019.
[2] Mark van Rijmenam, “Why We Need Ethical AI: 5 Initiatives to Ensure Ethics in AI,” Datafloq, 24 January 2020.
[3] Joe McKendrick, “‘The Algorithm Made Me Do It’: Artificial Intelligence Ethics Is Still On Shaky Ground,” Forbes, 22 December 2019.
[4] Steven Tiell, “Create an Ethics Committee to Keep Your AI Initiative in Check,” Harvard Business Review, 15 November 2019.
[5] Alison DeNisco Rayome, “3 guiding principles for ethical AI, from IBM CEO Ginni Rometty,” TechRepublic, 17 January 2017.
[6] Mala Anand, “Want Responsible AI? Think Business Outcomes,” Knowledge@Wharton, 17 July 2019.
[7] Sue Feldman, “Ethical issues in AI and cognitive computing,” KM World, 6 September 2019.