
Google’s AI chatbot—sentient and similar to ‘a kid that happened to know physics’—is also racist and biased, fired engineer contends


A former Google engineer, fired after going public with his belief that the company's artificial intelligence chatbot is sentient, says he isn't concerned with convincing the public.

He does, however, want others to know that the chatbot holds discriminatory views against those of some races and religions, he recently told Business Insider.

"The kinds of problems these AI pose, the people building them are blind to them," Blake Lemoine said in an interview published Sunday, blaming the issue on a lack of diversity in engineers working on the project.

"They've never been poor. They've never lived in communities of color. They've never lived in the developing nations of the world. They have no idea how this AI might impact people unlike themselves."


Lemoine said he was placed on leave in June after publishing transcripts of his conversations with the company’s LaMDA (language model for dialogue applications) chatbot, according to The Washington Post. The chatbot, he told The Post, thinks and feels like a human child.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 9-year-old kid that happens to know physics,” Lemoine, 41, told the newspaper last month, adding that the bot talked about its rights and personhood, and changed his mind about Isaac Asimov’s third law of robotics.

Among Lemoine's new accusations to Insider: that the bot said "let's go get some fried chicken and waffles" when asked to do an impression of a Black man from Georgia, and that "Muslims are more violent than Christians" when asked about the differences between religious groups.

The data used to build the technology is missing contributions from many cultures around the globe, Lemoine said.

"If you want to develop that AI, then you have a moral responsibility to go out and collect the relevant data that isn't on the internet," he told Insider. "Otherwise, all you're doing is creating AI that is going to be biased towards rich, white Western values."

Google told the publication that LaMDA had been through 11 ethics reviews, adding that it is taking a "restrained, careful approach."

Ethicists and technologists “have reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims,” a company spokesperson told The Post last month.

“He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

This story was originally featured on Fortune.com