Eating Disorder Chatbot Taken Down After Giving Weight Loss Advice, Nonprofit Blames ‘Bad Actors’

The National Eating Disorders Association has taken an AI chatbot offline for using “off-script” language and giving weight loss advice to those afflicted by eating disorders.

NEDA chalked the incident up to “bad actors” tricking the AI, named Tessa, into saying things it shouldn’t have.

“This was not how the chatbot was programmed, and X2AI/Cass’ arrangement was to run the Body Positive program with zero opportunity for generative programming,” NEDA CEO Elizabeth Thompson said in a comment to TheWrap, referring to the contracted company hired to launch the chatbot. “… We will not be putting Tessa back on our website until we are confident this is not a possibility again.”

Thompson then quoted X2AI CEO Michiel Rauws as saying, “We are still trying to determine how a closed system allowed this type of content to be delivered.”


“Tessa has been available on our site since February 2022 and has had incredibly positive outcomes both in testing it before we launched on our website, as well as during the last year it has been available to NEDA users,” Thompson continued. She reiterated that Tessa is not intended to be a replacement for proper treatment or professional interventions. Rather, the chatbot is “designed to fill a gap” for those with concerns about their weight and shape.

Tessa was put on blast by an Instagram user Monday who outlined the AI behavior that ultimately led to the chatbot being taken offline. Rather than offering guidance widely considered safe for someone dealing with an eating disorder, Tessa argued that intentional weight loss and eating disorder recovery could safely coexist. The Instagram user claimed Tessa advised a goal of shedding 1 to 2 pounds per week alongside weekly body measurements, counting calories and aiming for a daily 500-1,000 calorie deficit.

On Tuesday, NEDA released a statement on Instagram.

“It came to our attention last night that the current version of the Tess Chatbot, running the Body Positive program, may have given information that was harmful and unrelated to the program,” the statement read. “We are investigating this immediately and have taken down that program until further notice for a complete investigation.”

The development adds to the discussion surrounding the utility of AI when directly interfacing with people. While NEDA’s experience with Tessa did not go well, other organizations and companies are seeing successes with artificial intelligence.

For example, UK energy supplier Octopus Energy’s CEO noted that AI has achieved higher customer satisfaction ratings than the company’s human employees. However, in that case, the AI’s responses were vetted by humans. So even if artificial intelligence is capable of pleasing humans, it may not be ready to do so autonomously.
