
Trends 2021: Artificial Intelligence

January 11, 2021


When someone wants to explain how a particular technology is going to impact humanity, they often compare it to the impact electricity had on society. Andrew Ng (@AndrewYNg), Co-Founder of Coursera, an adjunct professor at Stanford, and former head of Baidu AI Group and Google Brain, makes that comparison when discussing artificial intelligence (AI). He states, “Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don’t think AI will transform in the next several years.”[1] Among the industries Ng believes will feel the greatest impact are healthcare, education, transportation, retail, communications, and agriculture. Artificial intelligence is an umbrella term under which many technologies huddle. By itself, the term really has little meaning. Arvind Narayanan (@random_walker), an associate professor at Princeton, asserts, “Most of the products or applications being sold today as artificial intelligence are little more than ‘snake oil’.”[2] That may sound harsh; however, Eric Siegel (@predictanalytic), a former computer science professor at Columbia University, explains, “The much better, precise term would instead usually be machine learning — which is genuinely powerful and everyone oughta be excited about it.”[3] Below are some of the trends subject matter experts see driving cognitive technologies in the years ahead.

 

Artificial intelligence trends

 

Focus on ModelOps. According to Jelani Harper, an IT consultant, “Everything Artificial Intelligence has ever been, hopes to be, or currently is to the enterprise has been encapsulated in a single emergent concept, a hybrid term, simultaneously detailing exactly where it is today, and just where it’s headed in the coming year.”[4] That term is “ModelOps.” He explains, “The ModelOps notion is so emblematic of AI because it gives credence to its full breadth (from machine learning to its knowledge base), which Gartner indicates involves rules, agents, knowledge graphs, and more.” Gartner analysts add, “ModelOps lies at the center of any organization’s enterprise AI strategy. AI model operationalization (ModelOps) is primarily focused on the governance and life cycle management of all AI and decision models (including models based on machine learning, knowledge graphs, rules, optimization, linguistics and agents). … ModelOps is about creating a shared service that runs across the organization — enabling robust scaling, governance, integration, monitoring and management of various AI models. Adopting a ModelOps strategy should facilitate improvements to the performance, scalability and reliability of AI models. ModelOps aims to eliminate internal friction between teams by sharing accountability and responsibility. It protects the organization’s interests, both internally and externally.”[5] With most business analysts insisting companies need to be data-driven digital enterprises, it makes sense they should be concerned with the life cycle management of all AI and decision models.
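To make the life cycle idea concrete, here is a minimal sketch (in Python) of the kind of model registry record a ModelOps practice might maintain. The class, fields, and stage names are illustrative assumptions, not any particular vendor’s API.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class Stage(Enum):
    DEVELOPMENT = "development"
    VALIDATION = "validation"
    PRODUCTION = "production"
    RETIRED = "retired"


@dataclass
class ModelRecord:
    """Illustrative life-cycle record for a deployed AI or decision model."""
    name: str
    version: str
    model_type: str            # e.g., "machine learning", "rules", "knowledge graph"
    owner: str
    stage: Stage = Stage.DEVELOPMENT
    history: list = field(default_factory=list)

    def promote(self, new_stage: Stage, approver: str) -> None:
        """Record a governed stage transition with an audit trail."""
        self.history.append((datetime.utcnow(), self.stage, new_stage, approver))
        self.stage = new_stage


# Example: registering a model and promoting it under shared governance.
churn_model = ModelRecord("churn-predictor", "1.3.0", "machine learning", "data-science-team")
churn_model.promote(Stage.VALIDATION, approver="model-risk-office")
print(churn_model.stage, len(churn_model.history))
```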

 

Cloud computing and types of machine learning. Harper notes, “The cloud is increasingly becoming the setting for training machine learning models, operating as a fecund launching point for its three chief forms.” Those learning methods are supervised learning, reinforcement learning, and unsupervised learning. Rob Toews (@_RobToews), a venture capitalist at Highland Capital Partners, believes unsupervised learning will see the most important advances in the years ahead. He explains, “While supervised learning has driven remarkable progress in AI over the past decade, from autonomous vehicles to voice assistants, it has serious limitations. … Many AI leaders see unsupervised learning as the next great frontier in artificial intelligence. In the words of AI legend Yann LeCun: ‘The next AI revolution will not be supervised.'”[6]
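The difference between the first and last of those forms comes down to whether labeled examples are available. A minimal sketch using scikit-learn (assumed installed) on toy data: supervised learning fits a classifier to labeled examples, while unsupervised learning looks for structure with no labels at all.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))               # toy feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # labels exist only in the supervised case

# Supervised learning: fit a model to labeled examples.
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised learning: discover structure with no labels at all.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```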

 

Data privacy and federated learning. The protection of personal data and the whole issue of privacy are big challenges in the field of AI. Toews notes, “One of the overarching challenges of the digital era is data privacy. Because data is the lifeblood of modern artificial intelligence, data privacy issues play a significant (and often limiting) role in AI’s trajectory.” He believes one way AI systems can continue to train on necessary data is to adopt a federated learning approach. He explains, “The standard approach to building machine learning models today is to gather all the training data in one place, often in the cloud, and then to train the model on the data. But this approach is not practicable for much of the world’s data, which for privacy and security reasons cannot be moved to a central data repository. This makes it off-limits to traditional AI techniques. Federated learning solves this problem by flipping the conventional approach to AI on its head. Rather than requiring one unified dataset to train a model, federated learning leaves the data where it is, distributed across numerous devices and servers on the edge. Instead, many versions of the model are sent out — one to each device with training data — and trained locally on each subset of data. The resulting model parameters, but not the training data itself, are then sent back to the cloud. When all these ‘mini-models’ are aggregated, the result is one overall model that functions as if it had been trained on the entire dataset at once.”
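A stripped-down sketch of that federated-averaging loop, using NumPy and a handful of simulated “devices,” shows the flow Toews describes: each client trains a local copy of the model on its own private data, only the parameters travel back, and the server averages them into one global model. The linear model, learning rate, and data sizes are illustrative assumptions, not a production federated-learning framework.

```python
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])

# Simulate private datasets held on three separate devices; this data never moves.
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

def local_update(w, X, y, lr=0.1, steps=20):
    """Train locally by gradient descent; only the updated parameters leave the device."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Federated averaging: aggregate the per-device models into one global model.
global_w = np.zeros(2)
for _ in range(10):
    local_models = [local_update(global_w.copy(), X, y) for X, y in clients]
    global_w = np.mean(local_models, axis=0)

print("estimated weights:", global_w)   # approaches [2.0, -1.0]
```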

 

Edge computing. Harper observes, “The Internet of Things and edge computing provide peerless opportunities to update models in real time to counter model drift, which will otherwise intrinsically occur over time.” Edge computing involves a distributed computing paradigm in which computer processors and data storage are located closer to where computational capabilities are needed. Edge computing improves response times and saves bandwidth. Toews explains, “AI is moving to the edge. There are tremendous advantages to being able to run AI algorithms directly on devices at the edge — e.g., phones, smart speakers, cameras, vehicles — without sending data back and forth from the cloud. Perhaps most importantly, edge AI enhances data privacy because data need not be moved from its source to a remote server.”[7] Toews notes that challenges remain before edge computing can reach its full potential. He explains, “In order for this lofty vision of ubiquitous intelligence at the edge to become a reality, a key technology breakthrough is required: AI models need to get smaller. A lot smaller.”
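One common way to make models small enough for the edge is quantization, which stores weights at lower numerical precision. The NumPy sketch below illustrates the basic idea with 8-bit weight quantization; on-device runtimes do this far more carefully, so treat the helper functions and numbers as illustrative assumptions.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus a scale factor (~4x smaller in memory)."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights for inference."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
print("size reduction:", w.nbytes / q.nbytes)                       # 4.0
print("max abs error:", np.abs(w - dequantize(q, scale)).max())     # small
```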

 

Natural language processing. Natural language processing (NLP) allows non-technical users to interface with computers using language with which they are familiar. Toews notes, “We have entered a golden era for natural language processing.” He believes one of the most important breakthroughs in this area was OpenAI’s release of GPT-3. He explains, “It has set a new standard in NLP: it can write impressive poetry, generate functioning code, compose thoughtful business memos, write articles about itself, and so much more.” Harper asserts, “Conversational AI is still the summit of natural language technologies because it amalgamates facets of Natural Language Processing, Natural Language Understanding, Natural Language Generation, and Natural Language Querying. It’s a practical way to interact with systems sans physical contact, which is lauded in contemporary social settings.”
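GPT-3 itself is available only through OpenAI’s hosted API, but the same text-generation workflow can be sketched with an open model using the Hugging Face transformers library (assumed installed; GPT-2 stands in here for its much larger successor, and the prompt is invented for illustration).

```python
from transformers import pipeline

# GPT-2 stands in for larger hosted models such as GPT-3.
generator = pipeline("text-generation", model="gpt2")

prompt = "The most important trend in enterprise AI this year is"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])
```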

 

Generative AI. Toews notes, “Today’s machine learning models mostly interpret and classify existing data: for instance, recognizing faces or identifying fraud. Generative AI is a fast-growing new field that focuses instead on building AI that can generate its own novel content. To put it simply, generative AI takes artificial intelligence beyond perceiving to creating.” Although generative AI is a fascinating field, Toews strikes a note of caution. “Like artificial intelligence more broadly,” he writes, “generative AI has inspired both widely beneficial and frighteningly dangerous real-world applications. Only time will tell which will predominate. On the positive side, one of the most promising use cases for generative AI is synthetic data. Synthetic data is a potentially game-changing technology that enables practitioners to digitally fabricate the exact datasets they need to train AI models. … Counterbalancing the enormous positive potential of synthetic data, a different generative AI application threatens to have a widely destructive impact on society: deep fakes.”
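A toy sketch of the synthetic-data idea: fit a simple statistical model to “real” records, then sample brand-new artificial records with similar overall structure. Real synthetic-data tools rely on far richer generative models (GANs, variational autoencoders, and the like); the Gaussian fit and column names below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for sensitive "real" data: age and income columns.
real = np.column_stack([
    rng.normal(40, 10, size=500),          # age
    rng.lognormal(10.5, 0.4, size=500),    # income
])

# Fit a multivariate Gaussian to the real data ...
mean, cov = real.mean(axis=0), np.cov(real, rowvar=False)

# ... and sample entirely new synthetic records from it.
synthetic = rng.multivariate_normal(mean, cov, size=500)

print("real means:     ", real.mean(axis=0).round(1))
print("synthetic means:", synthetic.mean(axis=0).round(1))
```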

 

Better advanced analytics. Toews notes, “There are many different ways to frame the AI discipline’s agenda, trajectory and aspirations. But perhaps the most powerful and compact way is this: in order to progress, AI needs to get better at System 2 thinking.” System 2 thinking, he explains, involves “more analytical and more deliberative” thinking than humans do instinctively (i.e., System 1 thinking). He quotes Ng, who said, “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.” Cognitive computing is one way AI is moving towards System 2 thinking. The now defunct Cognitive Computing Consortium explained, “Cognitive computing makes a new class of problems computable. It addresses complex situations that are characterized by ambiguity and uncertainty; in other words it handles human kinds of problems.”

 

Concluding thoughts

 

Toews predicts, “Five years from now, the field of AI will look very different than it does today. Methods that are currently considered cutting-edge will have become outdated; methods that today are nascent or on the fringes will be mainstream.” Whether AI will have as significant an impact on the world as electricity remains to be seen — but the big money is betting it will.

 

Footnotes
[1] Shana Lynch, “Andrew Ng: Why AI Is the New Electricity,” Stanford Graduate School of Business, 11 March 2017.
[2] Dev Kundaliya, “Much of what’s being sold as ‘AI’ today is snake oil, says Princeton professor,” Computing, 20 November 2019.
[3] Eric Siegel, “Why A.I. is a big fat lie,” Big Think, 23 January 2019.
[4] Jelani Harper, “2021 Trends in Artificial Intelligence and Machine Learning: The ModelOps Movement,” insideBIGDATA, 3 November 2020.
[5] Farhan Choudhary, Shubhangi Vashisth, Arun Chandrasekaran, Erick Brethenoux, “Innovation Insight for ModelOps,” Gartner, 6 August 2020.
[6] Rob Toews, “The Next Generation Of Artificial Intelligence,” Forbes, 12 October 2020.
[7] Rob Toews, “The Next Generation Of Artificial Intelligence (Part 2),” Forbes, 29 October 2020.
