AI Is Already in the Newsroom

Credit: AI Studio/Adobe Stock

For journalists who work — or used to work — at G/O Media, the disclaimer at the bottom of numerous articles on the company’s e-commerce site, The Inventory, offers an unsettling vision of the future: “This article was generated by an AI engine which may produce inaccurate information.”

That warning appears on every article credited to “The Inventory Bot,” a highly productive virtual employee with its byline on 10 articles published on December 14th alone. “It’s absolutely the most inane way to announce that your article is trash, like pure garbage,” says a former G/O writer who lost their job in a recent round of human-employee layoffs. “There’s no thought process going on behind this except, ‘we want as many articles on our site as possible as quickly as possible. And we don’t give a shit about the content.’” (A G/O spokesperson emphasizes that The Inventory “is an e-commerce site, separate and apart from our other editorial properties.”) The former G/O writer doesn’t blame the advent of AI for their own layoff, but there have been some job losses at the company due to automation: The entire staff of Gizmodo en Español was replaced by a years-old translation tool in August, according to the spokesperson.


G/O Media at least labels its robo-content. A more venerable brand, Sports Illustrated, became the face of AI-journalism calamity in recent weeks after publishing what were reported to be AI-written articles attributed to nonexistent people, complete with AI-generated headshots. (The articles came from a third-party company, AdVon Commerce. SI’s parent company, Arena Group, says the vendor told it the articles were human-written, albeit with fake bylines and photos. The outlet Futurism, which broke the news about the articles, quoted a newsroom source insisting that at least some of the text was AI-generated as well.) Four Arena Group executives, including CEO Ross B. Levinsohn, were fired after the scandal, though an Arena Group spokesperson says the timing was coincidental and related to ownership changes.

“Journalism is going to change more in the next three or five years than it has in the last 30 years.”

David Caswell, StoryFlow Ltd. founder

The true possibilities of generative AI began to reveal themselves to the world beyond Silicon Valley in 2023, leaving many industries either bracing for disruption, jumping on the technology as rapidly as possible, or both. The business of news is no exception, with multiple outlets embarrassing themselves this year by using AI tools that they either didn’t prompt correctly or that weren’t yet up to the tasks at hand: CNET found itself forced to correct errors in more than half of the 77 AI-generated articles it posted; Microsoft’s MSN highlighted fake news stories after abandoning human curators for an AI-powered homepage; Gannett had to stop using an automated news-writing company, LedeAI, that kept inserting the phrase “close encounter of the athletic kind” into high-school sports stories.

“These tools have a tendency to mess up, to get facts wrong, to hallucinate, to spread misinformation, even when they think it’s right,” says Jack Brewster, enterprise editor for the media watchdog organization NewsGuard.

Generative AI is “not quite there yet to be an autopilot,” adds Brad Weitz, CEO of Data Skrive, which has been using human-guided machine-learning technology to generate hundreds of sports stories a week for outlets including the Associated Press, ESPN, and USA Today since around 2018, without causing any significant controversy. Similarly, outlets including the AP and Bloomberg have been using pre-ChatGPT AI tools for years to generate straightforward stories on earnings reports and stock-market updates. “It’s not going to replace humans for an extended period of time,” Weitz says. “We look at it as, can we use this type of technology to make humans more efficient?”
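Part of why that older generation of tools has stayed out of trouble is how it works: stories are assembled from fixed templates filled with verified structured data, so there is nothing for the system to invent. The sketch below illustrates the general approach in Python; it is a toy example, not Data Skrive’s or the AP’s actual pipeline.

```python
# Minimal sketch of template-driven story generation from structured data,
# in the spirit of the pre-ChatGPT tools described above. Illustrative only;
# not Data Skrive's or the AP's actual system.

def game_recap(game: dict) -> str:
    """Fill a fixed sentence template from a structured box score."""
    margin = abs(game["home_score"] - game["away_score"])
    winner, loser = (
        (game["home_team"], game["away_team"])
        if game["home_score"] > game["away_score"]
        else (game["away_team"], game["home_team"])
    )
    # Word choice comes from simple rules, not a language model.
    verb = "edged" if margin <= 3 else "beat" if margin <= 14 else "routed"
    high = max(game["home_score"], game["away_score"])
    low = min(game["home_score"], game["away_score"])
    return f"{winner} {verb} {loser} {high}-{low} on {game['date']}."

print(game_recap({
    "home_team": "Central High", "away_team": "Westside",
    "home_score": 28, "away_score": 14, "date": "Friday",
}))
# -> Central High beat Westside 28-14 on Friday.
```

Because every sentence is a template filled with real numbers, the output can be dull but never hallucinated, a trade-off that has let such systems run for years without scandal.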

Still, the micro-scandals of 2023 are just a few stray drops of water from an impending flood that may well reshape an entire industry. “Journalism is going to change more in the next three or five years than it has in the last 30 years,” argues David Caswell, formerly of the BBC and Yahoo!, and now founder of StoryFlow Ltd., an AI-in-news consulting firm. Caswell’s extensive experience is on the technology side of the news business rather than editorial, and his current line of work offers an obvious incentive to predict radical change, but he’s far from alone in his assessment. Near-instantaneous automated rewrites of news stories by competing sites and newly AI-equipped search engines alike could cause job losses and potentially devastating declines in traffic and profits; the proliferation of AI news sites indistinguishable from real ones, sometimes with completely fake stories, seems fated to further erode public trust in journalism, and even any remaining societal sense of shared truths.

In the past month, developments have begun to speed up. The New York Times hired an editorial director of Artificial Intelligence Initiatives. German news giant Axel Springer (owner of U.S. publications Politico and Business Insider) struck a deal with ChatGPT creator OpenAI that will allow ChatGPT to offer “real-time news” when answering users’ questions. Semafor reported that Investing.com appeared to be rewriting competitors’ news articles with AI, without credit, one of many recent developments that augur an era in which AI-powered sites routinely rewrite human-reported scoops. (A spokesperson for Investing.com says the company is “working on and implementing an AI product to assist its editorial team… Investing.com has access to a wide array of financial instruments and is often generating its data from the same sources as other publishers in the industry. It is transparent and responsible in its development and use of AI, clearly specifying when it is deployed to assist its content team.”)

A startup called Channel1.ai debuted a proof-of-concept episode of a newscast starring AI-generated anchors. Apple reportedly opened talks with a long list of major publishers, offering massive payouts in exchange for permission to use their content to train its AI systems. And Wall Street reiterated its bullishness about AI’s future, with J.P. Morgan excitedly predicting “mass-scale white collar job realignment” that will impact “content creators,” a forecast that could become self-fulfilling as investors in media properties push for AI adoption.

Perhaps even more consequentially, Big Tech’s own use of AI loomed as an ominous external threat to publishers, with new forms of search threatening to keep readers from ever reaching their sites. Multiple executives at major publishers told the Wall Street Journal last week that the AI-powered search Google is slowly rolling out could devastate traffic by drawing on their content to answer users’ questions without directing them to external links. Even as Axel Springer struck its deal with OpenAI, its chairman and CEO, Mathias Döpfner, told the WSJ that “AI and large language models have the potential to destroy journalism and media brands as we know them.”

LIKE MANY NEWS OUTLETS, Sports Illustrated is under significant financial pressure, which likely left management willing to “try anything,” according to a source familiar with the publication and the broader media business. “I know the tenor of the feeling of the newsroom is that Arena doesn’t care about the quality of the content,” he says. “If it takes pennies to produce and it makes you a quarter, then it’s a return, right?” (An Arena Group spokesperson pointed back to its statement denying the use of AI.)

At G/O Media, the former writer recalls the first AI-generated articles appearing, without human editorial input, on the company’s sites within a week of a memo announcing that the company would be experimenting with the technology. “We saw a couple of AI articles in our back end… and then they got published without really any of us knowing,” the writer says. “It was very, very fast. We didn’t look at it. We didn’t get a chance. We didn’t edit it. We didn’t touch it. And almost as soon as it went up, we realized just how bad it was. And our editor-in-chief was like, ‘please send all the corrections you have,’ and it took us 30 minutes to an hour for all of us collectively to nitpick this absurd AI-generated article that none of us had any say in.” (The G/O Media spokesperson claims the writers’ union refused to help with the company’s AI experiments, and says the first article the company posted was “just a list about Star Wars” on Gizmodo.)

Justin Harvey, co-founder and CEO of the startup Infobot.AI, which aims to provide custom AI-powered newsfeeds to readers, sees existing news organizations as mostly focused on using AI to cut costs and, potentially, workers. “They’re trying to take their existing newsroom and just expand the margins, like, ‘How do we get rid of some of the people? How do we make it more efficient?’” In his mind, that gives startups like his own more room to pioneer newsgathering innovations, like using AI to examine and find newsworthy material in public records, everything from city council meetings to S.E.C. filings, at a scale no human could match. (At the same time, Harvey admits he has no idea yet how the company will turn its product into profits.) One of the hottest startups in the space is Instagram founder Kevin Systrom’s app Artifact, which uses AI as a headline-recommendation tool, as well as, more boldly, to rewrite “clickbait” headlines.
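The core of the public-records idea is straightforward: feed each document to a language model and ask whether it contains anything a reporter should chase. The sketch below is hypothetical (Infobot’s actual system is not public, and the prompt, model choice, and file layout here are assumptions).

```python
# A minimal sketch of the public-records screening idea described above.
# Hypothetical: assumes documents are scraped to local .txt files and uses
# the OpenAI Python SDK; Infobot's real pipeline is not public.
from pathlib import Path
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "You are screening public records for a local news desk. "
    "If this document contains anything newsworthy (spending anomalies, "
    "personnel changes, lawsuits, safety issues), summarize it in one "
    "sentence. Otherwise reply exactly: NOTHING."
)

def screen_documents(folder: str) -> list[tuple[str, str]]:
    """Return (filename, one-line tip) pairs for documents worth a look."""
    tips = []
    for doc in Path(folder).glob("*.txt"):  # e.g. scraped council minutes
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model choice
            messages=[
                {"role": "system", "content": PROMPT},
                {"role": "user", "content": doc.read_text()[:30000]},
            ],
        ).choices[0].message.content.strip()
        if reply != "NOTHING":
            tips.append((doc.name, reply))
    return tips
```

The appeal is scale: a loop like this can screen more meeting minutes in an afternoon than a beat reporter could read in a year, though, given the hallucination problems described above, every flagged item would still need human verification.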

Some smaller news sites have also found room to be innovative. Rappler, based in the Philippines, won an international AI-journalism competition in November thanks to TLDR, a project that used AI to reformat its news stories into shorter, more visually driven alternate versions designed to appeal to younger and more text-averse users. As Caswell sees it, that kind of easy reformatting of outlets’ reporting is one of the most promising uses of AI for news organizations. “It’s not a completely automated thing,” says Caswell, who coached the company as part of the competition. “But it vastly steps up the scale at which they can do it.”

Other small outlets have popped up that appear to be purely exploitative, and they’re likely just the beginning. NewsGuard has identified more than 600 “unreliable AI-generated news and information websites” this year, some of which publish entirely false articles to garner traffic, including erroneous obituaries for celebrities.

The number of such sites will almost certainly “grow exponentially over the next few years,” says NewsGuard’s Brewster. “We’ll see thousands by next year. I mean, what’s stopping a Super PAC from using artificial intelligence to start its own website that looks like local news and generates content that’s favorable for whatever political candidate it’s backing? It just lowers the barrier to entry for spreading misinformation. You have the power of thousands of writers at your disposal.”

It’s also probable that higher-profile, better-funded news sites will pop up to exploit large language models’ skill at summarization and paraphrasing, near-instantly grabbing breaking news stories reported by human journalists elsewhere and posting aggregated versions, a practice that, at large enough scale, could cannibalize the traffic and profits that fund actual newsgathering. “Someone’s going to just come out and say, ‘All right, fine, I’m just going to create a completely 100 percent AI-driven news site and have zero editors or anything on staff,’” predicts Eddie Kim, founder of the news-metrics site Memo.

Such a site could release “a paraphrased version of an article with just the information that’s in the article, two minutes after their competitor publishes their scoop,” says Caswell. “It’s morally reprehensible, but it’s legal because you can’t copyright the facts… You pay a lot of money as a news organization for your reporters and your infrastructure to do original reporting. And then, five minutes after it gets published, it’s used as raw material, without payment, for a consumer experience somewhere else. These are major, major structural changes.” The kind of AI-powered search Google is starting to use is alarming publishers because it presents exactly the same problem, without even involving another news brand. If customers get used to learning about the news by asking a chatbot or search engine questions and getting customized answers, the roles of news organizations and journalists seem destined to change dramatically.

At the same time, some existing news organizations may end up turning over some of their own aggregation to AI. The question is whether that will free up staff to do more original work, or whether, in an ever-brutal business, it simply means eventually replacing some human workers with automation. There are signs that at least the top organizations will choose the former path. Bloomberg, the Associated Press, and, more recently, the Wall Street Journal have used pre-ChatGPT tools to expand their output, seemingly without any negative impact on their human staffing.

Says the former G/O writer, “If you were basically able to tell writers, ‘You don’t have to do aggregated content anymore. We have an AI bot to take care of that. Instead, you guys should focus on voicey pieces, reported pieces, and investigations,’ I think that would be positive in a lot of ways.”

AI CAN REWRITE TEXT INSTANTLY and absorb and summarize documents faster than any human, but it can’t yet jump on a phone call or Zoom for even the most basic interviews, can’t work a network of sources, can’t trail a subject for an in-depth feature. “Things that are truly investigative, or things that require someone to speak to primary sources” are likely insulated for now from AI-driven change, says Kim. That said, news organizations that fund that kind of work also rely on scale and profits partially generated by aggregated work that may become harder to monetize. And outlets “whose current product is largely built on packaging commodity information” are likely to struggle the most in the coming world, as Caswell recently wrote.

Even Channel 1, the company planning custom newscasts with AI anchors, recognizes the limits and dangers of journalism without human intervention. “At a base level, this technology is terrifying,” says Channel 1 co-founder Adam Mosam, acknowledging the risks of totally convincing footage of nonexistent news anchors. “We’re hoping to get out in front of this inevitable future and create best practices for the world that we’re moving toward.”

For Channel 1, only a portion of news scripts will be AI-written, with the rest provided through partnerships (the company expects to announce one next month with a major news organization) and by its own human staff. “We are AI native, as opposed to 100 percent AI,” says Mosam. “And the difference is we have people at every stage of the newsroom.”

For now, LLMs struggle with some basic journalistic tasks: try to get OpenAI’s GPT-4 to edit an interview transcript into a clean Q+A, and you’ll find that it has quietly rewritten quotes, or simply invented new ones. Some researchers argue that a certain amount of “hallucination,” as the AI’s inventions are known, will always be necessary to allow for creativity in these models. But it’s clear that drawing lasting conclusions from today’s limitations would be a mistake.
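That particular failure is at least checkable by machine. The sketch below shows one way an editor might verify a model-produced Q+A against the source transcript; it assumes, hypothetically, that answers are prefixed with “A:”, and its exact-match test is deliberately strict, so legitimate cleanups of filler words would also be surfaced for review.

```python
# Minimal sketch of catching the quote-drift failure described above:
# check that every sentence the model puts in an answer actually appears
# in the source transcript. Assumes a hypothetical "Q: ... / A: ..." output
# format and exact matching; a real tool would allow looser matches.
import re

def normalize(text: str) -> str:
    """Collapse whitespace and case so matching ignores trivial edits."""
    return re.sub(r"\s+", " ", text).strip().lower()

def unverified_sentences(transcript: str, qa_text: str) -> list[str]:
    """Return answer sentences that never occur verbatim in the transcript."""
    source = normalize(transcript)
    answers = re.findall(r"^A:\s*(.+)$", qa_text, flags=re.MULTILINE)
    flagged = []
    for answer in answers:
        for sentence in re.split(r"(?<=[.?!])\s+", answer):
            if len(sentence) > 20 and normalize(sentence) not in source:
                flagged.append(sentence)
    return flagged

# Hypothetical file names: the raw transcript and the model's Q+A draft.
with open("interview.txt") as f, open("ai_qa_draft.txt") as g:
    for s in unverified_sentences(f.read(), g.read()):
        print("NOT IN TRANSCRIPT:", s)
```

A check like this can prove a quote appeared in the transcript, but it cannot judge whether a rewritten quote preserved the speaker’s meaning; that part stays with a human editor.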

“People generally overestimate what the system can do now and underestimate what it can do in 10 years,” says Infobot’s Harvey. “People probably overestimate what it can do now by 20 percent or something, and they underestimate what it’s gonna be able to do in the future by 10,000 percent. But probably most people can’t even begin to grapple with the concept of where this is going.”
