
Natural Language Processing: Learning and Lies

December 8, 2022


Large natural language processing (NLP) systems are making headlines thanks to the remarkable conversations in which they can now engage. Some of those conversations have convinced individuals they are, in fact, talking to a real person. You might have read that Google fired Blake Lemoine, a senior software engineer in its Responsible AI unit, over his claim that a chatbot system called LaMDA (Language Model for Dialog Applications) was sentient. One of his proofs for this claim was that LaMDA told him it was a real person.[1] Part of that conversation went this way:


Lemoine: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?

LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.


Another NLP system that has garnered a lot of attention is OpenAI’s Generative Pre-trained Transformer 3 (GPT-3) language model. The staff at Analytics Insight notes, “The 175-billion parameter deep learning model is capable of producing human-like text and was trained on large text datasets with hundreds of billions of words. When OpenAI released GPT-3, in June 2020, the neural network’s apparent grasp of the language was uncanny. It could generate convincing sentences, converse with humans, and even autocomplete code.”[2] The staff goes on to discuss other large NLP programs, like Microsoft’s Turing NLG model. In addition, they highlight programs being developed in China.


“This year brought a proliferation of large AI models built by multiple tech firms and top AI labs, many surpassing GPT-3 itself in size and ability. … This year the Chinese tech giant Huawei built a 200-billion-parameter language model called PanGu. Inspur, another Chinese firm, built Yuan 1.0, a 245-billion-parameter model. Baidu and Peng Cheng Laboratory, a research institute in Shenzhen, announced PCL-BAIDU Wenxin, a model with 280 billion parameters that Baidu is already using in a variety of applications, including internet search, news feeds, and smart speakers. And the Beijing Academy of AI announced Wu Dao 2.0, which has 1.75 trillion parameters.”


The staff reports that South Korea has also joined this group: “South Korean internet search firm Naver announced a model called HyperCLOVA, with 204 billion parameters.”
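For readers curious about what it actually takes to generate text with one of these large models, the short Python sketch below shows a call to the OpenAI client library roughly as it existed when this post was written. The model name, prompt, and settings are illustrative assumptions on my part, not details drawn from any of the articles cited here.

import os
import openai

# Minimal sketch: asking a GPT-3 family model to generate text via the
# OpenAI Python client (circa 2022). Assumes an API key is available in the
# OPENAI_API_KEY environment variable; the model name, prompt, and settings
# are illustrative, not taken from the article.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 family model available in late 2022
    prompt="Define the term 'decision science' in two sentences.",
    max_tokens=120,
    temperature=0.7,  # higher values produce more varied (and riskier) output
)

print(response.choices[0].text.strip())

Nothing in a call like this constrains the model to tell the truth; it simply continues the prompt with statistically likely text, which is exactly the concern taken up in the next section.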


Learning and Lies


All those parameters are supposed to improve how well large language models perform, and the models can indeed carry on convincing conversations. The question is: Can you trust what they say? OpenAI recently made available a version of its GPT-3 system called ChatGPT. Tech journalist Benj Edwards (@benjedwards) tweeted, “It’s possible that OpenAI invented history’s most convincing, knowledgeable, and dangerous liar — a superhuman fiction machine that could be used to influence masses or alter history.” Let me give you a couple of examples to demonstrate what Edwards means.


When I asked ChatGPT to define the term “decision science,” the system wrote: “Decision Science is the study and application of data-driven decision-making. It combines elements from a variety of disciplines, including economics, mathematics, statistics, computer science, psychology, and operations research, to help organizations make better decisions. It is used to develop and analyze data-driven strategies for marketing, pricing, operations, finance, and other business areas. Decision Science also helps organizations develop and use predictive models to anticipate customer behavior, identify potential opportunities, and make decisions that maximize performance.” I would say that description of decision science was pretty accurate.


When I asked ChatGPT to write an article about Stephen DeAngelis, CEO of Enterra Solutions®, it wrote: “Stephen DeAngelis is the CEO of Enterra Solutions, a leading provider of artificial intelligence (AI) and cognitive computing solutions.” So far, so good. Then ChatGPT decided to embellish my bio. It wrote, “DeAngelis began his career at Apple Computer, where he was part of the team that created the Mac OS X operating system. He was then a senior executive at Oracle before joining Enterra in 2006.” None of that is true. The article continued to mix fact with fiction for a few more paragraphs.


What Does this Mean for the Future?


Like you, I’ve read articles predicting that journalists and marketers are going to be made redundant by AI systems like GPT-3. But, as my little experiment with ChatGPT demonstrated, these systems are not bound by any moral code to tell the truth. They can make things up. Edwards tweeted, “I think [OpenAI staff members] are aware of these issues.” Nevertheless, technology journalist Ina Fried (@inafried) calls ChatGPT “scary good.”[3] She writes:


“The high quality of ChatGPT’s responses adds to the fun, but also highlights the risks associated with AI. … A big pitfall for today’s most advanced AI programs is their ability to be ‘confidently wrong,’ presenting falsehoods authoritatively. That’s certainly the case with ChatGPT, which can weave a convincing tale about a completely fictitious Ohio-Indiana war. Nightmare scenarios involve fears that text from AI engines could be used to inundate the public with authoritative-sounding information to support conspiracy theories and propaganda. OpenAI chief Sam Altman says some of what people interpret as ‘censorship’ — when ChatGPT says it won’t tackle a user request — is actually an effort to keep the bot from spewing out false info as fact.”


Based on my experience, the system certainly has no problem being “confidently wrong.” Fried points out other concerns as well. She writes, “ChatGPT, like other text generators, also creates problems when it gets things right. Educators, who already often have to run essays through online tools to make sure they weren’t plagiarized, worry that their difficult task could be made even harder.” Clearly, educators are going to have to fact-check essays to help determine when students are cheating.


You might not think that NLP systems play any role in your life; however, if you have ever tabbed to accept a wording recommendation in an email or text message, you’ve used such a system. Nate Rosidi, a data scientist at StrataScratch, explains that autocorrect, autocomplete, and spell checkers all involve NLP software.[4] He writes, “The NLP software scans the text for grammatical and spelling errors, corrects them, or gives correction suggestions. There are also spell checkers that can ‘understand’ the whole sentence’s syntax, context, and meaning. Based on that, they suggest corrections or better-phrased sentences in line with the goal you’re trying to achieve with your text.” (A minimal sketch of how such a correction suggestion might work appears after the excerpt below.) He goes on to describe a few other uses for NLP software solutions:


“Language translation is another use of NLP. Whenever you’re in a foreign country, you probably use a translation tool, such as Google Translate. Also, translators are more and more used on social media, such as Facebook, Instagram, and YouTube. Recognizing and generating speech is also one of the NLP uses. Think of Google Assistant, Windows Speech Recognition, Dragon, Siri, Alexa, or Cortana; they all seem to understand you (more or less) when you talk. Based on what you tell them, they will perform a certain action, such as browsing the internet, typing your words, or playing your favorite song. Some of these tools can even talk back to you, i.e., generate speech. NLP can also decipher the ‘feel’ of the text. In other words, they can detect the sentiment behind the text, not only the literal meaning. This means understanding emotions (happy, angry, disturbed, neutral…), sarcasm, double meaning, metaphors, and expressions within a context. This is called sentiment analysis. Think of understanding the social media comments and removing those breaking the terms of service or getting the customers’ satisfaction by analyzing their comments and reviews. NLP is heavily used in online marketing. The keywords you search are matched with the keywords of the companies, their products, and their ads. So when you start seeing ads for a product you just Googled, don’t worry. You’re not crazy; it’s NLP and targeted advertising at work.”
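To make Rosidi’s spell-checking example a bit more concrete, here is a minimal, self-contained Python sketch of one way a tool can offer correction suggestions: compare the typed word against a dictionary and return the closest matches. The tiny vocabulary is an illustrative assumption; real spell checkers use far larger dictionaries and context-aware language models.

import difflib

# Toy correction-suggestion sketch: return the dictionary words closest to a
# typed word. The vocabulary is an illustrative assumption; real spell checkers
# use large dictionaries plus models that weigh the surrounding context.
VOCABULARY = ["language", "processing", "sentence", "correction", "suggestion"]

def suggest(word, vocabulary=VOCABULARY, cutoff=0.7):
    """Return up to three close dictionary words, or the word itself if none qualify."""
    matches = difflib.get_close_matches(word.lower(), vocabulary, n=3, cutoff=cutoff)
    return matches or [word]

print(suggest("langauge"))   # ['language']
print(suggest("procesing"))  # ['processing']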
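Sentiment analysis, mentioned near the end of that excerpt, is just as easy to try. The sketch below assumes the open-source Hugging Face transformers library is installed along with a backend such as PyTorch; the default model it downloads and the example reviews are illustrative, not anything referenced in this post.

from transformers import pipeline

# Minimal sentiment-analysis sketch using a pretrained model from the Hugging
# Face transformers library. The example reviews are made up for illustration;
# the default English sentiment model is downloaded on first use.
classifier = pipeline("sentiment-analysis")

reviews = [
    "The support team resolved my issue in minutes. Fantastic!",
    "The product arrived broken and nobody answered my emails.",
]

for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']:8} ({result['score']:.2f})  {review}")

A classifier like this only labels the overall polarity of a sentence; reliably catching sarcasm, metaphor, and double meaning, which Rosidi also mentions, remains much harder.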


Concluding Thoughts


There are clearly valuable uses for NLP programs. There are also valid reasons to be concerned about the content such programs can generate. Edwards was spot on when he asserted that such programs can be convincing, knowledgeable, and dangerous. Fried concludes, “Even in its present form, ChatGPT can serve up useful answers to plenty of questions — and that’s without being trained on the latest news and information. With some refinement, it could easily become a great search interface, as some have pointed out. (Google, along with others, is already aiming to answer more search queries directly.)” If we’re not careful, these programs could also become the world’s greatest sources of misinformation. How these programs will be used in the future, and how they will be moderated, remains an open question.


Footnotes
[1] Blake Lemoine, “Is LaMDA Sentient? — an Interview,” Medium, 11 June 2022.
[2] Market Trends, “GPT-3 and Monster AI Models: What is in Store for the Future?” Analytics Insight, 12 February 2022.
[3] Ina Fried, “New AI chatbot is scary good,” Axios, 5 December 2022.
[4] Nate Rosidi, “NLP Interview Questions,” KDNuggets, 5 October 2022.
