Artificial Intelligence Augmentation and the Future Workforce

February 2, 2018

According to Kris Subramanian (@Kris_O3) and Subin Perumbidy, Co-Founders of Option3, “Enterprise technology is on the cusp [of] an inflection point afforded by the leaps made in automation to re-shape how businesses operate in the recent years.”[1] Many analysts see this “inflection point” as the beginning of the end for human workers. Hardly a day goes by without a headline proclaiming that automation is going to eliminate millions of jobs now performed by humans. Automation isn’t new; it has been around since humans invented the first machine to take over a tedious and repetitive job. As technologies have improved, the number of jobs eliminated has increased. Historically, however, technologies have produced more jobs than they have eliminated.

The Cognitive Era

Some analysts believe the next wave of technology will be different. They fear more jobs will be eliminated than created — resulting in social unrest and economic turmoil. The technology often pointed to as the driving force of this transition is artificial intelligence (AI) or a subset of AI called cognitive computing. This technology is deemed so dominant that many analysts are calling the coming years the Cognitive Era. Dan Pontefract (@dpontefract), Chief Envisioner of TELUS Transformation Office, writes, “Several large-scale organizations have begun to align around a concept known as the ‘cognitive era.’ Based on artificial intelligence and advanced cognitive systems, ‘cognitive era’ technology has the ability to make judgments and form hypotheses based on the synthesis of ‘big data.’ Furthermore, cognitive systems have the ability both to learn and to adapt their decisions and line of thinking. What’s not to like about the so-called ‘cognitive era’?”[2]

Pontefract is aware that business executives have historically pursued profits regardless of social consequences. Such strategies have often resulted in strikes and violence. Considering the potential consequences of the cognitive era, he asks, “Do those in charge of the ‘cognitive era’ possess ‘hearts and minds’ in their work?” In fact, many of them do. Amazon, Facebook, Google, IBM, and Microsoft have “created an organization to set the ground rules for protecting humans — and their jobs — in the face of rapid advances in artificial intelligence. The Partnership on AI unites [these companies] in an effort to ease public fears of machines that are learning to think for themselves and perhaps ease corporate anxiety over the prospect of government regulation of this new technology.”[3] Pontefract also points out that IBM believes the cognitive era should pursue the goal to “augment human intelligence.”

Are Humans Destined to Become Cyborgs?

Pontefract writes, “We need to put human intelligence before artificial intelligence.” I believe the term “augmenting human intelligence” does that, but augmentation can take several forms. At Enterra Solutions®, cognitive computing refers to software that combines humanlike reasoning with cutting-edge mathematics, wrapped in natural language, to solve complex problems. Natural language processing ensures humans can use cognitive computing as a tool to enhance their decision-making. Along those same lines, Subramanian and Perumbidy note that Robotic Process Automation “frees up a significant amount of [workers’] time which allows them to focus on more productive activities and makes the workforce move up the value chain with the technology to augment their efforts and reduce the workloads.”
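
To make that division of labor concrete, here is a minimal sketch, in Python, of the human-in-the-loop pattern Subramanian and Perumbidy describe: routine cases are processed automatically while judgment calls are escalated to a person. The invoice scenario, the field names, and the auto_approve_limit threshold are hypothetical illustrations, not code from Option3 or Enterra Solutions.

```python
# A minimal human-in-the-loop triage sketch. The invoice scenario, the
# field names, and the auto_approve_limit threshold are hypothetical
# illustrations of the augmentation pattern, not any vendor's actual code.

from dataclasses import dataclass


@dataclass
class Invoice:
    vendor: str
    amount: float
    matches_purchase_order: bool


def triage(invoice: Invoice, auto_approve_limit: float = 1_000.0) -> str:
    """Handle routine cases automatically; escalate judgment calls to a person."""
    if invoice.matches_purchase_order and invoice.amount <= auto_approve_limit:
        return "auto-approved"          # repetitive work the software absorbs
    return "escalated to human review"  # the worker focuses on the exceptions


if __name__ == "__main__":
    routine = Invoice("Acme Corp", 250.00, matches_purchase_order=True)
    unusual = Invoice("Acme Corp", 25_000.00, matches_purchase_order=False)
    print(triage(routine))  # -> auto-approved
    print(triage(unusual))  # -> escalated to human review
```

The point of the pattern is not to remove the worker but to reserve human attention for the exceptions, which is what “moving up the value chain” amounts to in practice.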

Irving Wladawsky-Berger, a former IBM executive, notes, “Few organizations have … integrated the large amounts of data, analytical tools and powerful AI systems now at our disposal into their decision making systems.”[4] Like Pontefract, Wladawsky-Berger insists, “Technology alone won’t determine the success of a human-AI decision system.” Shan Carter (@shancarter), a member of the Google Brain team, and Michael Nielsen (@michael_nielsen), a researcher at Y Combinator, believe we need to create a new mindset about the purpose of computers. They suggest this new field of research be called artificial intelligence augmentation (AIA). They write, “While cognitive outsourcing is important, this cognitive transformation view offers a much more profound model of intelligence augmentation. It’s a view in which computers are a means to change and expand human thought itself.”[5]

Some analysts believe humans must transform to keep up with technology and take augmentation to the next level. Katie Collins writes, “Some entrepreneurs are looking into augmenting our brains so that we can develop superintelligence. The procedure will likely involve implanting technology into our brains, creating what’s known as a brain-computer interface.”[6] Bryan Johnson, CEO of Kernel, a company developing brain-computer interfaces, has stated, “My biggest concern is we don’t have the ability to cooperate [with computers]. If we cooperate, we can solve problems. What I want is us to be in the game of solving problems.”[7]

Summary

Brookings analysts note, “Workers of every stripe — from corporate finance officers to sales people to utility workers and nurses — are now spending sizable portions of their workdays using tools that require digital skills.”[8] In the future, the most important of those tools will be cognitive computing systems. Wladawsky-Berger points to studies showing that collaborations between people and AI systems are more effective than either humans or machines working on their own. He writes, “An effective human-AI decision system should have access to large numbers of people with expertise in the problem being addressed, not only across the overall organization but beyond, since complex problems increasingly involve collaborations across multiple institutions. It would then use machine-learning-like AI algorithms to assemble the appropriate teams with the expertise required to address a particular complex problem, and provide them with the right tools to securely share data and ideas.” He cites a study by MIT Media Lab professor Sandy Pentland and his associates, MIT professor Tom Malone and CMU professor Anita Woolley, in which they concluded, “This approach should provide the organizational scale and flexibility required for cross-domain decisions and also the agility to interoperate at the speed of competition in the future.” The bottom line is this: Business leaders need to learn how to ensure their human and virtual workers can best collaborate in the years ahead.
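
As a rough illustration of the team-assembly step Wladawsky-Berger describes, the toy Python below ranks hypothetical expert profiles against a problem statement using simple bag-of-words cosine similarity and returns the closest matches. The profiles, the names, and the scoring method are invented for this sketch; a real system would rely on far richer expertise models and the secure data-sharing tools he mentions.

```python
# A toy sketch of the team-assembly idea: rank hypothetical expert profiles
# against a problem statement with bag-of-words cosine similarity. The
# profiles, names, and scoring method are invented for illustration only.

from collections import Counter
import math


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[term] * b[term] for term in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def assemble_team(problem: str, experts: dict, size: int = 2) -> list:
    """Return the `size` experts whose skill text best matches the problem."""
    problem_vec = Counter(problem.lower().split())
    scores = {name: cosine(problem_vec, Counter(skills.lower().split()))
              for name, skills in experts.items()}
    return sorted(scores, key=scores.get, reverse=True)[:size]


if __name__ == "__main__":
    experts = {
        "analyst_a": "supply chain optimization demand forecasting",
        "analyst_b": "natural language processing cognitive computing",
        "analyst_c": "demand forecasting retail analytics",
    }
    print(assemble_team("improve demand forecasting for retail supply chain", experts))
    # -> ['analyst_a', 'analyst_c']
```

However the matching is implemented, the core idea is the same: the algorithm proposes the team, and the humans on that team do the cross-domain reasoning.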

Footnotes
[1] Kris Subramanian and Subin Perumbidy, “Striking The Balance Between A Human And Virtual Workforce For Organisations,” Inc42, 30 December 2017.
[2] Dan Pontefract, “Of Hearts And Minds And The Cognitive Era,” Forbes, 12 December 2017.
[3] Steve Lohr, “Protecting Humans and Jobs From Robots Is 5 Tech Giants’ Goal,” The New York Times, 28 September 2016.
[4] Irving Wladawsky-Berger, “Building an Effective Human-AI Decision System,” The Wall Street Journal, 1 December 2017.
[5] Shan Carter and Michael Nielsen, “Using Artificial Intelligence to Augment Human Intelligence,” Distill, 4 December 2017.
[6] Katie Collins, “As AI and robots rise up, do humans need an upgrade too?” C|NET, 13 December 2017.
[7] Ibid.
[8] Mark Muro, Sifan Liu, Jacob Whiton, and Siddharth Kulkarni, “Digitalization and the American Workforce,” Brookings Institution, November 2017.
