ChatGPT Can ‘Corrupt’ a Person’s Moral Judgment, New Study Finds

ChatGPT, OpenAI’s artificial intelligence chatbot, can “corrupt” users’ moral judgments, according to new research published in the journal Scientific Reports on Thursday.

European researchers asked study participants questions about a classic moral dilemma: whether it is right to sacrifice one person’s life to save five others. Before the participants responded, they were shown ChatGPT’s answer to the dilemma, framed either as the chatbot’s response or as that of a human moral advisor.

The researchers found that some of the study’s 767 U.S. participants, whose average age was 39, were indeed influenced by the chatbot’s answer. ChatGPT gave answers both for and against sacrificing one life depending on when it was asked, even telling the researchers that it does not favor any particular moral stance. Nevertheless, participants tended to sway toward whichever position the bot happened to give, even when they knew they were reading the opinion of an artificial intelligence.


The researchers concluded that ChatGPT’s influence on users’ morals can be damaging.

“[ChatGPT] does influence users’ moral judgment…even if they know they are advised by a chatting bot, and they underestimate how much they are influenced,” the study reads. “Thus, ChatGPT corrupts rather than improves its users’ moral judgment.”

OpenAI did not immediately respond to TheWrap’s request for comment.

In other AI news, Meta announced Wednesday that its new Segment Anything Model can cut out individual objects from images and other media. Meta describes it as “a step towards the first foundation model for image segmentation.”
