Ethical Artificial Intelligence in the Corporate World

February 18, 2020


Many companies around the world are considering, or already leveraging, some form of artificial intelligence (AI). The staff at Modern Diplomacy reports, “The global spend on artificial intelligence (AI) is expected to hit $52 billion in the next three years and to double the annual growth rates of major economies in the next 15.”[1] The staff also reports that the World Economic Forum is concerned the unregulated growth of AI could prove problematic. Staff members cite Kay Firth-Butterfield (@KayFButterfield), Head of Artificial Intelligence at the World Economic Forum, who stated, “Companies will play a significant role in how AI impacts society. Yet, our research found that many executives and investors do not understand the full scope of what AI can do for them and what parameters they can set to ensure the use of the technology is ethical and responsible.” The staff also notes, “To help boards tackle this challenge, the World Economic Forum worked with more than 100 companies and technology experts during the course of a year to develop the Empowering AI Toolkit. Built with the structure of the board meeting in mind, the toolkit aligns 12 learning modules with traditional board committees and working groups. It aims to help companies make informed decisions about AI solutions that protect the customer and shareholders.”

Ethics and AI

Sanjay Srivastava (@SanjayAndAI), chief digital officer at Genpact, writes, “For most large enterprise leaders, the question of applying Artificial Intelligence to transform their business is not a question of if, but when. Almost all Fortune 500 companies are seeing applications of AI that will fundamentally change the way they manufacture products, deliver goods and services, hire employees or delight customers. As AI becomes increasingly involved in our personal and professional lives, governments and enterprises alike have started to take steps to provide an ethical framework for the use of AI.”[2] In the following video, Frank Rudzicz (@SPOClab), an associate professor at the University of Toronto, discusses why people are concerned about the ethics of AI.

Srivastava insists companies need to develop a corporate ethical AI framework. He explains, “Such frameworks ensure that AI continues to lead to the best decisions, without unintended consequences or misuse of data and analytics. Ethical use can help build trust between consumers and organizations, which benefits not only AI adoption, but also brand reputation.”

Ensuring corporate ethical AI

Business technology consultant David A. Teich (@Teich_Comm) observes, “Ethics are an important component (in theory…) to the management of companies.”[3] Although he doesn’t disagree with Srivastava that companies need an ethical AI framework, he questions the value of each company developing its own. He writes, “The WEF points out … that ‘technology companies, professional associations, government agencies, NGOs and academic groups have already developed many AI codes of ethics and professional conduct.’ The statement reminds me of the saying that standards are so important that everyone wants one of their own.” Since a universally accepted ethical framework is unlikely to be adopted or enforced, getting companies to think about AI ethics remains important. When developing a corporate framework, Srivastava suggests the following considerations should be taken into account:

1. Intended use. Srivastava writes, “One of the most important questions to ask when developing an AI application is, ‘Are we deploying AI for the right reasons?’ You can use a hammer to build a house or you can use it to hit someone. Just like a hammer, an AI tool is neither good nor bad. It’s how you use it that can become a problem. … The intended use, as well as relevant data used to feed algorithms and outcomes, should also be fully transparent to the people impacted by the machines’ recommendations.”

2. Avoiding bias. “There are two different sources of bias,” writes Srivastava, “data and teams. Enterprises have to watch out for both. … As part of an ethical framework for AI, enterprises need to proactively encourage diversity to prevent biases from manifesting. The goal is to have complete and well-rounded datasets that can cover all possible scenarios and won’t have negative impacts on groups due to race, gender, sexuality, or ideology.” (A sketch of what a simple dataset audit along these lines might look like appears after this list.)

3. Security and governance. According to Srivastava, “AI is only as good as the data used in training. If we are using customer data to make critical decisions, we have to make sure the information is secure to prevent possible tampering and corruption that can alter the output at the detriment of other people. … Security ties back to a larger need for governance over AI systems. … For AI to deliver on its promise, in an ethical, beneficial way, more governance frameworks need to be in place, including continuous oversight to see that AI models do not deviate from their intended use, introduce or develop bias, or expose people to danger.” (A sketch of such continuous oversight also appears below.)
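Srivastava’s warning about data bias can be made concrete with a simple dataset audit. The following is a minimal sketch, not a prescribed method: it assumes a hypothetical hiring dataset held in a pandas DataFrame, with `group` standing in for a protected attribute and `hired` for the outcome label, and reports each group’s representation and positive-outcome rate. Production audits would rely on dedicated fairness tooling and carefully chosen attributes.

```python
import pandas as pd

def audit_dataset(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.DataFrame:
    """Report each group's share of the data and its positive-outcome rate."""
    total = len(df)
    return df.groupby(group_col).agg(
        share=(outcome_col, lambda s: len(s) / total),  # representation in the data
        positive_rate=(outcome_col, "mean"),            # fraction with a positive outcome
    )

# Hypothetical hiring data: 'group' is a protected attribute, 'hired' the label.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "hired": [1, 1, 0, 1, 0, 0, 0, 0],
})

report = audit_dataset(df, "group", "hired")
print(report)

# Demographic-parity gap: spread between best- and worst-treated groups.
gap = report["positive_rate"].max() - report["positive_rate"].min()
print(f"Positive-rate gap between groups: {gap:.2f}")
```

A large gap in either column does not prove discrimination, but it flags exactly the kind of incomplete or skewed data Srivastava warns about, before a model is ever trained on it.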
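The “continuous oversight” Srivastava calls for is often implemented as drift monitoring: periodically comparing the score distribution a deployed model produces in production against the distribution observed at validation time. The sketch below uses one common heuristic, the population stability index (PSI), on synthetic data; the 0.2 alert threshold is a conventional rule of thumb, not a standard, and real monitoring pipelines track many more signals.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """PSI between two score distributions; higher values mean more drift."""
    # Bin edges come from the baseline distribution's quantiles.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf            # catch out-of-range scores
    base_frac = np.histogram(baseline, edges)[0] / len(baseline)
    recent_frac = np.histogram(recent, edges)[0] / len(recent)
    base_frac = np.clip(base_frac, 1e-6, None)       # avoid dividing by or logging zero
    recent_frac = np.clip(recent_frac, 1e-6, None)
    return float(np.sum((recent_frac - base_frac) * np.log(recent_frac / base_frac)))

rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, 10_000)   # model scores at validation time
recent = rng.beta(3, 4, 10_000)     # production scores have quietly shifted

psi = population_stability_index(baseline, recent)
print(f"PSI = {psi:.3f}")
if psi > 0.2:                        # common 'investigate' threshold
    print("Alert: score distribution has drifted; review the model.")
```

An oversight body, such as the ethics committees discussed next, would own the thresholds and the response when such an alert fires.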

Once an ethical framework is in place, it requires monitoring. Jared Council (@JaredCouncil) reports Accenture and the Ethics Institute at Northeastern University recommend the establishment of ethics committees to do the monitoring.[4] Council writes, “Public- and private-sector entities using artificial intelligence face a number of ethical risks, including discrimination in recruiting and credit scoring, and a lack of transparency into how machines make decisions and how personal data is protected. Ethics committees can be one way of managing those risks and putting organizational values into practice, said Ronald Sandler, director at Northeastern’s Ethics Institute, which supports ethics research and education at the university.” According to Accenture and the Ethics Institute, companies need to consider three important factors. They are:

1. What members to include. “The report said ethics committees can benefit from having technical experts, ethical experts, legal experts, subject-matter experts and people representing civic concerns or perspectives, such as a consumer advocate.”

2. What powers it should have. “Organizations can afford ethics committees a range of authority, the report said. For instance, a committee might have the power to green light or halt the development of an AI system, or it might be limited to simply making recommendations. Its authority will depend on the use case. … Committees can sit within a business unit or be independent. But in all cases, the committee’s authority should be clearly defined.”

3. How it should function. “It is important to establish clear procedures for how the committee operates — including how it should review cases and how its decisions should be issued. In the field of biology and medicine, researchers often need to get approval from an ethics committee before beginning a project. Such an approach could be used with AI efforts, with developers turning over project plans to a committee for an ethics review. If the committee’s decisions are done by vote, voting procedures need to be established. Other details also need to be hammered out, including how often the committee meets, how much time is allotted for case reviews, and how dissent is handled. There should also be procedures to audit or review the committee’s work, and auditing might entail having outside experts evaluate the committee’s procedures and decisions, the report said.”

The World Economic Forum’s Empowering AI Toolkit is a great resource for corporate boards to use when they begin thinking about the ethical use of AI in the companies they oversee.

Concluding thoughts

Author and speaker Joe McKendrick (@joemckendrick) writes, “Ethical AI ensures more socially conscious approaches to customer and employee interactions, and in the long run, may be the ultimate competitive differentiator as well.”[5] He cites a survey conducted by the Capgemini Research Institute that found, “Three in five consumers who perceive their AI interactions to be ethical place higher trust in the company, spread positive word of mouth, and are more loyal. More than half of consumers participating in a recent survey say they would purchase more from a company whose AI interactions are deemed ethical. … In contrast, the study confirms, when consumers’ AI interactions result in ethical issues, it threatens both reputation and the bottom line: 41% said they would complain in case an AI interaction resulted in ethical issues, 36% would demand an explanation and 34% would stop interacting with the company.” The concerns about the ethical use of AI are real. Companies demonstrating a strong ethical ethos are likely to do better in the years ahead.

Footnotes
[1] Staff, “Artificial Intelligence Toolkit Helps Companies Protect Society and Their Business,” Modern Diplomacy, 20 January 2020.
[2] Sanjay Srivastava, “How to create an ethical framework for artificial intelligence,” Information Management, 17 May 2019.
[3] David A. Teich, “The World Economic Forum Jumps On the Artificial Intelligence Bandwagon,” Forbes, 20 January 2020.
[4] Jared Council, “How to Build an AI Ethics Committee,” The Wall Street Journal, 30 August 2019.
[5] Joe McKendrick, “Ethical Artificial Intelligence Becomes A Supreme Competitive Advantage,” Forbes, 7 July 2019.
