How the Mistakes Made By LLM Creators Have Led to Problem Children

Chatbots are talkative but not conscious

  • AI chatbots are generating headlines due to their strange conversations with users.

  • Experts say the weirdness stems from the way users interact with the chatbots.

  • Most experts agree that chatbots aren’t conscious.

Thatpana Onphalai / Getty Images

AI chatbots are getting weird, and some observers say the problem could be the large language models (LLMs) used to create them.

Microsoft's AI Bing chatbot is generating odd or aggressive responses to queries. The software generates text based on prompts, and its strange replies have fueled rumors that chatbots could become sentient. Not so, said Evan Coopersmith, the executive vice president of data science for the software firm AE Studio.

"With large language models, the process of training is literally a subset of what is known as 'reinforcement learning,'" Coopersmith told Lifewire in an email interview. "Much like parenting, we are reinforcing what we believe is the appropriate behavior."

Runaway Chatbots

When Microsoft released its Bing chatbot recently, the company apparently expected its creation to generate relatively mundane responses to common internet queries. Things started to go off the rails when users got into extended conversations with the chatbot, which is powered by an LLM, a type of computer program built for natural language processing.

The real problem might be the fact that users are training the chatbot with their responses. Coopersmith likens the chats to a parenting relationship.

"With children, we have millennia to at least offer some data regarding what behaviors we might want to reinforce or punish," he added. "With LLM, we're hoping for the best without a safety net if the model grows up to be bigger, stronger, smarter, and entirely ungovernable."

Dvir Ben-Aroya, the CEO of the email program Spike, said in an email interview that both parenting and talking to a chatbot involve nurturing and guiding an entity toward a specific outcome or goal. "Both require patience, consistency, and repetition to achieve the desired outcome," he said. "Just as parents use positive reinforcement and constructive criticism to guide their children, users of LLMs can provide feedback to help the chatbot learn and improve its language teaching abilities."

But while parenting aims to raise a child into a responsible adult (and is often a learning process for the parent, too), with LLMs the users also need to learn how to talk to the chatbot. What seems like a casual chat is actually a learning exercise for the users themselves, Ben-Aroya pointed out.

Om siva Prakash / Unsplash

"They are learning how a machine can use algorithms to understand natural language, recognize patterns, and respond appropriately to prompts," he said. "By using an LLM, users can experience firsthand the ways in which AI is being integrated into everyday life and how it is changing the way we interact with technology."

Conscious or Not?

The incredibly realistic conversations that AI chatbots can produce lead some users to speculate that the software is becoming self-aware. Google engineer Blake Lemoine was fired by the tech giant last year after claiming that its AI, known as LaMDA, had gained consciousness.

The idea that AI is self-aware has drawn scoffs from many computer scientists. Ben-Aroya agreed, saying that while chatbots are designed to simulate conversation, they do not possess self-awareness or consciousness.

"People may sometimes believe that LLMs are conscious due to the use of natural language processing, which can make it seem as though the chatbot is understanding and responding to human speech in a way that is similar to human conversation," Ben-Aroya said. "However, LLMs are programmed to follow a set of rules and algorithms to respond to prompts and do not have the capacity for independent thought or self-awareness."

Michael Littman, a computer science professor at Brown University who studies machine learning and the applications and implications of artificial intelligence, said in a news release that a machine can't be self-aware until it starts considering the impact of its actions and whether they will help it achieve its goals.

"Current programs either have broad exposure to human knowledge but no goals, like these chatbots, or they have no such knowledge but limited goals, like the sort of programs that can play video games," he added.

However, Coopersmith leaves more room for doubt on the AI consciousness question. "My intuition is that LLMs are not yet conscious," he said. "But then, my pet is certain that he is more intelligent than I am, so why should I be so confident that we are more 'conscious' than the algorithm?"