What the Eating Disorder Chatbot Disaster Tells Us About AI

I firmly believe that, just like every sci-fi film has ever predicted, AI is disastrous for humanity. Even if it doesn’t directly turn on us and start killing us like M3GAN, it certainly has the power to get us to turn on each other and harm ourselves—not unlike, say, social media. And it’s already coming for our jobs. First, AI came for McDonald’s drive-thru. Then it came for my job, with ChatGPT and Jasper promising to write provocative prose about the human condition despite having no experience with it whatsoever. Now, AI is trying to replace crisis counselors, but in a not-so-shocking turn of events, the bots appear to lack the proper empathy for the job.

In March, the staffers of the National Eating Disorders Association (NEDA) crisis hotline voted to unionize, NPR reports. Days later, they were all fired and replaced with an AI chatbot named Tessa. While NEDA claimed the bot was programmed with a limited number of responses (and thus wouldn’t start, I don’t know, spewing racial slurs like many chatbots of the past), it’s still not without its problems. Most importantly, the tech doesn’t seem to serve its intended purpose. Fat activist Sharon Maxwell revealed on Instagram that during her interactions with Tessa, the bot actually encouraged her to engage in disordered eating.

Maxwell claims she told Tessa that she had an eating disorder, and the AI bot replied with tips on how to restrict her diet. The bot reportedly recommended that Maxwell count calories and aim for a daily deficit of 500 to 1,000 calories. Tessa also recommended that Maxwell weigh herself weekly and even use calipers to determine her body composition.

“If I had accessed this chatbot when I was in the throes of my eating disorder, I would NOT have gotten help for my ED,” Maxwell wrote. “If I had not gotten help, I would not still be alive today.”

While Maxwell didn’t provide screenshots of the chatbot’s problematic messages, our sister site Gizmodo found that the bot didn’t know how to respond to rudimentary entries like “I hate my body” or “I want to be thin so badly.”

In the wake of the bad press surrounding Tessa’s dangerous inability to handle its one core function, NEDA announced it had taken down the chatbot and would investigate what went wrong. But it’s hard to see how the bot could do much better in the future. Why? Because effective hotline work requires tailoring responses to the individual caller and picking up on cues that the tech simply can’t detect. Even Tessa’s creator told NPR that the chatbot was never meant to interact with callers the way humans can; it was designed to deliver relevant information as quickly as possible to those who need it most. How it went so far off the rails is unknown.

This isn’t the first time that tech has inadvertently posed a risk to people experiencing eating disorders. In December, The New York Times reported that TikTok, Gen Z’s social media platform of choice, was suggesting videos that encouraged and instructed disordered eating within minutes of a new user joining the platform, even when that user indicated being as young as 13.

Computer chips have never been through pain, nor have they experienced emotions. They lack free will and the ability to think. Replacing humans with AI is primarily a cost-cutting maneuver, and someone in crisis deserves better than to interface with a tool created with the goal of saving time and money.
