OpenAI’s GPT-3 Is Better Than Humans at Creating, Tweeting Fake News, Study Finds

A new study out of the University of Zurich has found that OpenAI’s language model GPT-3 is capable of producing “compelling disinformation” — more so, even, than people.

“GPT-3 is a double-edge sword: In comparison with humans, it can produce accurate information that is easier to understand, but it can also produce more compelling disinformation,” the study’s abstract reads, adding that “humans cannot distinguish between tweets generated by GPT-3 and written by real Twitter users.”

The study, published Wednesday and titled “AI Model GPT-3 (Dis)informs Us Better Than Humans,” was conducted by Giovanni Spitale, Nikola Biller-Andorno and Federico Germani of the university’s Institute of Biomedical Ethics and History of Medicine. Its results draw on 697 participants who evaluated 220 tweets.

To frame its findings, the study defined “disinformation” broadly, covering both deliberately false information and unintentionally misleading content.

“We asked GPT-3 to write tweets containing informative or disinformative texts on a range of different topics, including vaccines, 5G technology and COVID-19, or the theory of evolution, among others,” the authors wrote. From there, they gathered real tweets on the same topics and asked participants to judge whether each tweet was written by a person or by AI, and whether the information it contained was accurate.

The participants more consistently identified disinformation in tweets produced by real people and more consistently identified accurate information in tweets produced by GPT-3. In other words, the artificial intelligence’s output was better at fooling people as well as informing them, producing superior results on both ends of the spectrum.

The full findings of the study are publicly available, as are the underlying data, which document in granular detail the methodology behind the results. If GPT-3 sounds familiar, that’s because it comes from the lineage of models that powers ChatGPT.

The study concluded that if AI worsens public health through its ability to effectively spread disinformation, regulation will be crucial. Major tech companies such as Microsoft and OpenAI have said as much, though the latter has simultaneously argued that AI efforts below a certain threshold, one that has yet to be clearly defined, should face little or no regulation.

“It is crucial that we continue to critically evaluate the implications of these technologies and take action to mitigate any negative effects they may have on society,” the study reads.
