This Story Was Not Written by a Robot: AI and the Future of News Media

This is Part Three of a series about AI and its impact on Hollywood. Keep reading WrapPRO for upcoming stories on AI and screenwriting, acting and production. Level up your entertainment career and subscribe!

Several months ago, the tech news site CNET began running scores of personal finance explainers such as “What is compound interest?” and “How to cash a check without a bank account.” There was nothing particularly noteworthy about these stories — “Compound interest can be a great way to increase your savings over time,” one of them blandly reported — except for three pretty remarkable things.

First, many of these 75-plus pieces were laden with errors. Second, some appeared to have been plagiarized. And third, none of them were written by humans, but instead by an internally designed AI engine, according to a Futurism.com investigation as well as the site itself.

In an age-old bid to save time and money, newsrooms around the country are increasingly dabbling with AI, from making small additions to an article to trying to replace the human writer outright. So far, the results have been decidedly mixed. Other publications such as Men’s Journal have also had to correct AI-generated errors as their misplaced trust, or lack of appropriate checks, has come back to haunt them.

Will AI threaten the core of our news-gathering culture, which already has been challenged in the past two decades by the Internet? Not immediately, even though news organizations are experimenting widely… and, some believe, recklessly.

“When you put every truck driver in America out of work, sure you’ve saved your company some money,” said Gabriel Kahn, professor at the USC Annenberg School of Journalism, “but you’ve also just caused widespread economic chaos across the United States.”

Learning the uses — and misuses — of AI

“Journalists and especially publishers of journalism need to know what generative [AI] models are good at doing and what they’re bad at doing,” said Jeremy Gilbert, the Knight Chair in Digital Media Strategy for Northwestern University’s Medill School of Journalism and the former director of strategic initiatives at the Washington Post. “They are bad at facts. They are bad at math.”

Generative AI tools like ChatGPT, a chatbot that lets users enter prompts and receive novel, human-like text, video or images in return, are good at imitating writing, but they don’t do any reporting at all, Gilbert noted. (Generative AI refers to a form of artificial intelligence that creates new content in response to a prompt.) They also don’t do calculations well, nor do they know the difference between compound interest and multiplication, he said.

But that doesn’t mean there isn’t an effective role for such tools.

Jeremy Gilbert, Knight Professor in Digital Media Strategy at Northwestern-Medill, left, and Gabriel Kahn, professor and director of USC’s Future of Journalism at the Annenberg Innovation Lab. (Courtesy of Northwestern-Medill, USC)

BBC News Labs, a collaborative unit for media innovation and product development, is working on a project that would allow media outlets to use a large language model to help update the description paragraph at the top of each topic page, Gilbert said. Such pages collect all the stories an outlet has written on topics such as immigration, politics and climate change.

The AI system would look at the latest news story, along with all the stories the outlet has previously published on that topic, and recommend an update if it determines one is needed. The recommendation would then go to an editor, who decides whether the suggested change makes sense.

“It is a great example of where these tools are really helpful in conjunction with humans,” Gilbert said. “I’m highly skeptical that we can use these kinds of tools to replace reporters.”
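
In outline, that kind of editor-in-the-loop update could look something like the sketch below. It is a minimal illustration only, assuming OpenAI’s chat completions API in Python; the prompt, function name and example inputs are invented for this article and are not BBC News Labs’ actual implementation.

```python
# Hypothetical sketch of an editor-in-the-loop topic-page updater.
# The prompt, names and use of OpenAI's chat completions API are assumptions
# made for illustration; this is not BBC News Labs' actual system.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def suggest_topic_summary(current_summary: str, latest_story: str,
                          prior_headlines: list[str]) -> str:
    """Ask the model whether the topic-page blurb needs updating after a new story."""
    prompt = (
        "You maintain the short description at the top of a news topic page.\n\n"
        f"Current description:\n{current_summary}\n\n"
        "Headlines of earlier coverage:\n- " + "\n- ".join(prior_headlines) + "\n\n"
        f"Latest story:\n{latest_story}\n\n"
        "If the description is still accurate, reply exactly NO_CHANGE. "
        "Otherwise reply with a revised one-paragraph description."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()


suggestion = suggest_topic_summary(
    current_summary="Example topic-page description goes here.",
    latest_story="Full text of the newest story on this topic goes here.",
    prior_headlines=["Example earlier headline one", "Example earlier headline two"],
)
if suggestion != "NO_CHANGE":
    print(suggestion)  # in practice, the suggestion would be routed to an editor for review
```

The point of the design is that the model only drafts a suggestion; nothing reaches the page until a human editor signs off.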

When ChatGPT burst onto the scene late last year, it captured society’s imagination, said Sisi Wei, editor-in-chief at The Markup, a nonprofit news publication focused on the impact of technology on society. Virtually every big tech company is now rushing to get out its own version.

sisi wei the markup ai media
Sisi Wei, the editor-in-chief of The Markup. (Courtesy of The Markup)

But while this specific technology could be useful on an individual level, say for creative writing or putting together a travel schedule, it’s not clear whether media companies can depend on it, given its current problems with accuracy, Wei said. In the same way that virtual reality headsets haven’t taken off partly because they make some users nauseated, among other issues, accuracy is a key obstacle for journalists using generative AI to write articles.

“Until you can really guarantee results there… it’s always going to be an experiment as opposed to something people are really going to adopt or they’ll adopt it and then have to issue these corrections and then everybody feels bad,” Wei said.

There’s also the risk that bad actors will make use of such technology for their own political or financial gain. Such actors include those who have been comfortable making up fake news stories about politics (remember Russia?) or those creating listicles that have little or no news value.

“There are lots of people who try to game the advertising system to make money off of something that looks like journalism,” Gilbert said.

Another risk in using these tools is that people may be making financial or even health-related decisions based on inaccurate information presented as news and disseminated from chatbots that have blind spots. And once errors are disseminated widely, it often becomes difficult to correct the record.

New cases in the field of libel could mean “we’d have to understand if ‘a chatbot made me do it’ is a defensible position,” Kahn said.

That’s why Kahn is advocating for industry-level standards around a new set of ethics, a commitment to transparency and broader and deeper media literacy education concerning where the information people are reading originated. (Not all media have used obvious disclaimers to note that a story was produced with AI; CNET initially did not.)

“We have laws around… pharmaceuticals that you’re allowed to prescribe to people and understanding what the side effects are and everything else and we simply don’t have that for social platforms, media, things like that,” Kahn said.

A helping robot hand

For about a decade, some news outlets have been successfully using natural language generation — AI programming that converts structured data into prose — to produce basic articles about earnings reports and sports games. Such systems can take over the menial tasks, or commodity journalism, that most journalists and media companies are happy to offload.

Agencies like Bloomberg or the Associated Press take an earnings report, for example, run it through their machine and “out pops a sort of first-go article that allows them to be speedier, and in many cases, also more accurate than what the human can do,” Kahn said.
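
That kind of structured-data-to-prose pipeline is far simpler than a chatbot. A toy version might look like the sketch below; the field names, thresholds and company are invented for illustration, and real systems such as Bloomberg’s or the AP’s are far more elaborate.

```python
# Toy natural language generation: turning a structured earnings record into prose.
# Field names, thresholds and the company are fictional; this is not Bloomberg's
# or the AP's actual system.
def earnings_recap(data: dict) -> str:
    surprise = data["eps_actual"] - data["eps_expected"]
    if surprise > 0:
        verdict = "beat analyst expectations"
    elif surprise < 0:
        verdict = "fell short of analyst expectations"
    else:
        verdict = "matched analyst expectations"
    return (
        f"{data['company']} reported {data['quarter']} earnings of "
        f"${data['eps_actual']:.2f} per share, which {verdict} of "
        f"${data['eps_expected']:.2f}, on revenue of ${data['revenue_billions']:.1f} billion."
    )


print(earnings_recap({
    "company": "Acme Corp",  # fictional example data
    "quarter": "second-quarter",
    "eps_actual": 1.12,
    "eps_expected": 1.05,
    "revenue_billions": 4.2,
}))
# Acme Corp reported second-quarter earnings of $1.12 per share, which beat
# analyst expectations of $1.05, on revenue of $4.2 billion.
```

Because every number in the output comes straight from the underlying data, this kind of automation largely sidesteps the accuracy problems that plague free-form generative models.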

The AP began automatically generating corporate earnings stories in 2014 and has added automated game previews and recaps of some sports events, according to a spokeswoman. It also uses AI in the newsroom to manage information flows, produce audio-to-text transcriptions, help with text-to-text translations and localize content, among other things.

BuzzFeed’s CEO Jonah Peretti recently told TheWrap he’s not interested in producing “content farms” and wants to use AI for entertainment rather than for news. But Futurism revealed this week that the company is quietly publishing AI-generated travel guides that, the outlet argued, “read like a proof of concept for replacing human writers.”

News media have also long used AI systems that help with transcription, article and photo recommendations, and even spelling and grammar checking, all of which play an important role in any newsroom.

“No one calls autocomplete on their phone artificial intelligence, but it’s undoubtedly that and in many ways actually ironically, it’s very much like the large language models that we’re all so excited about right now” that predict what words one might use, Gilbert said.

While generative AI might never fit the bill as a real reporter, that doesn’t mean it can’t be used to help reporters and editors with a range of journalistic tasks.

The Lost Coast Outpost in California’s Humboldt County, for example, is already using DALL-E, another generative AI offering from OpenAI, to create images for the stories it writes.

“To me, that’s a great use of the technology,” Northwestern’s Gilbert said. “It doesn’t harm human jobs. They didn’t have any budget to make artwork to begin with. That’s the kind of thoughtful, clever thing that can be done that many large and small news organizations are not yet trying.”

Not everyone agrees with Gilbert. Wired recently prohibited the use of AI-generated images, citing in part the financial threat to photographers and illustrators. “Selling images to stock archives is how many working photographers make ends meet,” the publication’s editors wrote. “At least until generative AI companies develop a way to compensate the creators their tools rely on, we won’t use their images this way.” The primary exception is for stories where AI-generated art is the subject at hand.

While Wei said she doesn’t trust AI to generate articles for The Markup — or even for low-stakes writing like automated Little League game articles — she noted that it can be used as a good brainstorming partner. A reporter who’s considering writing about a certain topic can ask ChatGPT or another generative tool for the five most important things to know about that topic.

While journalists need to keep in mind that not all of the information churned out will be correct, the reporter may find one or two of the suggestions interesting and can then ask the tool to name some well-known experts on those topics.

“I think there’s a lot of researching that it can do in a very conversational kind of way and then it’s up to you to go validate that information and then actually go do your real reporting after that,” Wei said.
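
In code, that research-then-verify loop can be as simple as a couple of prompts. The sketch below assumes OpenAI’s chat completions API in Python; the prompts and topic are illustrative, and everything the model returns would still need to be independently verified.

```python
# Minimal sketch of the "brainstorming partner" workflow: ask for background,
# then for names to pursue. Prompts, model and topic are illustrative only.
from openai import OpenAI

client = OpenAI()


def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


topic = "municipal broadband"  # example topic
background = ask(f"What are the five most important things to know about {topic}?")
experts = ask(f"Who are some well-known experts on {topic}?")

# Treat both answers as unverified leads: check every claim against primary
# sources and confirm each named expert actually exists before reaching out.
print(background)
print(experts)
```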

The Markup is currently exploring how it could use generative AI tools to hold companies accountable, she said. As with other tools, reporters are interested in how they can use them to test entire systems, such as a company’s policies or how it treats certain types of content.

“How do we actually use it for interesting database reporting — that’s how we think about it,” Wei said.

Journalism professors could also use generative AI to create several different articles on a certain topic and then assign students to fact-check and edit those articles while evaluating their structure.

“You just expect it to be something a robot wrote right? … Your job is to turn it into the journalism or make it good journalism, and I think that teaches people a lot about that process of critical thinking,” Wei said.

Assuming the technology is better trained on journalistic output and accuracy, both Gilbert and Wei say they can envision a future in which generative AI is used to create different versions of an article depending on the reader’s individual needs and preferences.

“There are clearly ways that journalism can’t be using AI because it cannot be depended on yet to act like a real journalist,” said Wei. “But that doesn’t mean there aren’t many exciting and extremely helpful ways that journalists can use AI as a part of the journalistic process.”

Based on the quotes a reporter has and the kind of story they want to tell, for example, they could direct a large language model like GPT-4, the latest version of OpenAI’s artificial intelligence model, to churn out a longer or shorter version, or a more linguistically complex or simpler one, tailored to individual readers’ needs. This could make often complex topics like technology and politics more accessible to people.
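
A newsroom experimenting with that idea might wrap the personalization in a single function, as in the sketch below. The parameters, prompt and model choice are assumptions made for illustration, not a description of any publisher’s actual system, and the input is presumed to be an already reported, fact-checked draft.

```python
# Illustrative sketch of reader-tailored rewrites of a finished, fact-checked
# draft. Parameters and prompt are invented for this example.
from openai import OpenAI

client = OpenAI()


def tailor_article(draft: str, length: str = "short",
                   reading_level: str = "general-audience") -> str:
    """Rewrite a finished draft for one reader's preferences without changing facts."""
    prompt = (
        f"Rewrite the news article below as a {length} version for a {reading_level} "
        "reader. Keep every fact, figure and quotation exactly as written; change "
        "only length and phrasing.\n\n" + draft
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# e.g. tailor_article(draft, length="very short", reading_level="middle-school")
# Each tailored version would still pass through an editor before publication.
```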

Moreover, the first-party data that publishers have about which topics people read, and how quickly they read them on a smartphone or iPad, could even help tailor articles to users’ reading habits, Gilbert said.

“The thing that I am most excited about by far is the potential to move beyond mass media” with such tools, he said.

Balancing the ways AI can and should be used — without compromising journalistic accuracy or integrity — is perhaps the latest challenge facing newsrooms today.

At Northwestern University, an associate professor in communication studies and computer science recently launched the “Generative AI in the Newsroom Project” to collaboratively explore how and when journalists should use generative AI in news production.

And in the meantime, news outlets are clearly continuing their experimentation. Following the CNET debacle, the site did a full audit of the AI-drafted articles and issued sometimes lengthy corrections on more than half the articles produced in that series. It also noted that it was temporarily halting the use of its AI-generated content until it feels confident the tool — and its editorial processes — “will prevent both human and AI errors.”

“The process may not always be easy or pretty, but we’re going to continue embracing it — and any new tech that we believe makes life better,” then-CNET editor in chief Connie Guglielmo wrote on the popular tech site in January. Shortly afterward, Guglielmo stepped down from her role to become an executive in charge of AI strategy at CNET parent Red Ventures.

This is Part Three of a series about AI and its impact on Hollywood. Keep reading WrapPRO for upcoming stories on AI and screenwriting, acting and production.

Read Part One: AI and the Rise of the Machines: Is Hollywood About to Be Overrun by Robots?
Read Part Two: How Hollywood’s Guilds Are Preparing for the Dangers – and Benefits – of AI
