In an acknowledgment that artificial intelligence (AI) is going to play a significant role in the lives of Americans, the Executive Office of the President (i.e., the White House) released two documents on the subject. The first document, prepared by the National Science and Technology Council's Committee on Technology, is entitled "Preparing for the Future of Artificial Intelligence." Its companion document, the "National Artificial Intelligence Research and Development Strategic Plan," lays out a roadmap for federally funded research and development in AI. The Executive Summary of the first document explains why artificial intelligence is receiving so much attention. It states:
“One area of great optimism about AI and machine learning is their potential to improve people’s lives by helping to solve some of the world’s greatest challenges and inefficiencies. Many have compared the promise of AI to the transformative impacts of advancements in mobile computing. Public- and private-sector investments in basic and applied R&D on AI have already begun reaping major benefits to the public in fields as diverse as health care, transportation, the environment, criminal justice, and economic inclusion. The effectiveness of government itself is being increased as agencies build their capacity to use AI to carry out their missions more quickly, responsively, and efficiently.”
Although the study recognizes the significant upside of artificial intelligence developments, it also concedes that some negative consequences of AI implementation will be felt, especially in the area of job sustainability. The Executive Summary states:
“AI’s central economic effect in the short term will be the automation of tasks that could not be automated before. This will likely increase productivity and create wealth, but it may also affect particular types of jobs in different ways, reducing demand for certain skills that can be automated while increasing demand for other skills that are complementary to AI. Analysis by the White House Council of Economic Advisors (CEA) suggests that the negative effect of automation will be greatest on lower-wage jobs, and that there is a risk that AI-driven automation will increase the wage gap between less-educated and more educated workers, potentially increasing economic inequality.”
The report also admits that policymakers have a daunting task ahead of them: trying to regulate artificial intelligence systems in order to protect individuals and society as a whole from potential harm or abuse. The report states:
“AI has applications in many products, such as cars and aircraft, which are subject to regulation designed to protect the public from harm and ensure fairness in economic competition. How will the incorporation of AI into these products affect the relevant regulatory approaches? In general, the approach to regulation of AI-enabled products to protect public safety should be informed by assessment of the aspects of risk that the addition of AI may reduce alongside the aspects of risk that it may increase. If a risk falls within the bounds of an existing regulatory regime, moreover, the policy discussion should start by considering whether the existing regulations already adequately address the risk, or whether they need to be adapted to the addition of AI. Also, where regulatory responses to the addition of AI threaten to increase the cost of compliance, or slow the development or adoption of beneficial innovations, policymakers should consider how those responses could be adjusted to lower costs and barriers to innovation without adversely impacting safety or market fairness.”
Policymakers have always been (and always will be) in a tail chase with advances in technology. They are simply not in a position to anticipate how any given technology might be used or what the consequences of that use might be. Their motto should be “Vigilance and Perseverance.” The report concedes that individuals and corporations involved in the development of artificial intelligence systems must accept initial responsibility when it comes to the safety and ethics of their systems. The report notes:
“Use of AI to control physical-world equipment leads to concerns about safety, especially as systems are exposed to the full complexity of the human environment. A major challenge in AI safety is building systems that can safely transition from the ‘closed world’ of the laboratory into the outside ‘open world’ where unpredictable things can happen. Adapting gracefully to unforeseen situations is difficult yet necessary for safe operation. Experience in building other types of safety-critical systems and infrastructure, such as aircraft, power plants, bridges, and vehicles, has much to teach AI practitioners about verification and validation, how to build a safety case for a technology, how to manage risk, and how to communicate with stakeholders about risk. At a technical level, the challenges of fairness and safety are related. In both cases, practitioners strive to avoid unintended behavior, and to generate the evidence needed to give stakeholders justified confidence that unintended failures are unlikely. Ethical training for AI practitioners and students is a necessary part of the solution.”
Racial bias is one concern the study's authors raised, especially as AI applies to law enforcement. Concerning the issue of bias, Jordan Pearson writes, "In a section on fairness, the report notes what numerous AI researchers have already pointed out: biased data results in a biased machine. For example, artificial intelligence is being used by law enforcement across North America to identify convicts at risk of re-offending and high-risk areas for crime. But recent reports have suggested that AI will disproportionately target or otherwise disadvantage people of colour. … This actually happened, by the way: an AI was tasked with judging a beauty pageant, and picked nearly all-white winners from a pool where people from diverse backgrounds were represented. This principle could play out in day-to-day scenarios like job hunting, too."
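The "biased data in, biased model out" principle Pearson describes can be made concrete with a small sketch. The data, group labels, and sampling story below are entirely hypothetical, invented only to illustrate the mechanism: if historical records over-monitor one group, a model that faithfully fits those records will rate that group as riskier, even if the underlying behavior of the two groups is identical.

```python
from collections import Counter

# Hypothetical historical records as (group, re_offended) pairs.
# Suppose the two groups behave identically, but group B was monitored
# twice as closely, so twice as many of its re-offenses were recorded.
records = ([("A", True)] * 10 + [("A", False)] * 90
           + [("B", True)] * 20 + [("B", False)] * 80)

# "Train" the simplest possible model: estimate per-group risk
# directly from the (biased) recorded frequencies.
totals = Counter(group for group, _ in records)
positives = Counter(group for group, reoffended in records if reoffended)
risk = {group: positives[group] / totals[group] for group in totals}

# The model now rates group B as twice as risky as group A,
# purely as an artifact of how the data were collected.
print(risk)  # {'A': 0.1, 'B': 0.2}
```

Any real system trained to minimize error on such records would learn the same skew; the sketch just strips the estimator down to raw frequencies so the source of the disparity is visible.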
Although the report focuses on artificial intelligence, the most important message it conveys focuses on humans and the impact AI could have on the jobs they fill and the jobs they will lose. In an interview with Wired magazine, President Barack Obama stated, “Issues of choice and free will … have some significant applications for specialized AI, which is about using algorithms and computers to figure out increasingly complex tasks. We’ve been seeing specialized AI in every aspect of our lives, from medicine and transportation to how electricity is distributed, and it promises to create a vastly more productive and efficient economy. If properly harnessed, it can generate enormous prosperity and opportunity. But it also has some downsides that we’re gonna have to figure out in terms of not eliminating jobs. It could increase inequality. It could suppress wages.” The report concludes that government has an important, if necessarily limited, role to play in the development and implementation of AI systems.
“The U.S. Government has several roles to play. It can convene conversations about important issues and help to set the agenda for public debate. It can monitor the safety and fairness of applications as they develop, and adapt regulatory frameworks to encourage innovation while protecting the public. It can provide public policy tools to ensure that disruption in the means and methods of work enabled by AI increases productivity while avoiding negative economic consequences for certain sectors of the workforce. It can support basic research and the application of AI to public good. It can support development of a skilled, diverse workforce. And government can use AI itself to serve the public faster, more effectively, and at lower cost. Many areas of public policy, from education and the economic safety net, to defense, environmental preservation, and criminal justice, will see new opportunities and new challenges driven by the continued progress of AI. The U.S. Government must continue to build its capacity to understand and adapt to these changes.”
The commercial and academic sectors have a much larger and more important role to play in the development and implementation of AI systems. The report states, “As the technology of AI continues to develop, practitioners must ensure that AI-enabled systems are governable; that they are open, transparent, and understandable; that they can work effectively with people; and that their operation will remain consistent with human values and aspirations. Researchers and practitioners have increased their attention to these challenges, and should continue to focus on them.” The report concludes, “Developing and studying machine intelligence can help us better understand and appreciate our human intelligence. Used thoughtfully, AI can augment our intelligence, helping us chart a better and wiser path forward.”
Jordan Pearson, "The White House Wants To End Racism In Artificial Intelligence," Motherboard, 12 October 2016.
Joi Ito and Scott Dadich, "Barack Obama, Neural Nets, Self-driving Cars, and the Future of the World," Wired, October 2016.