AI-generated articles have no place on the web: Wikipedia and Google now deem complacent publications generally unreliable in a seismic shift

A robot reading a newspaper.

What you need to know

  • With AI's rapid emergence and adoption, several publications quickly integrated the technology into their workflow and even laid off some of their staff.

  • AI can generate news articles in a split second but with little regard for accuracy, attributions, or even grammar.

  • CNET is one of the publications that used the technology to write some of its articles.

  • This strategy has negatively impacted its rating on Wikipedia, dropping it from a trustworthy and reliable source to "generally unreliable."


AI-generated articles may appear to be a convenient solution for publications, but they threaten a publication's reputation, credibility, and search engine ranking. That is what happened to Red Ventures-owned CNET, a tech news and reviews site (via Futurism).

In 2022, the publication started using AI to generate some of its articles. The experiment was short-lived: readers spotted serious grammar issues and plagiarism in several pieces, prompting the publication to hit pause on AI-generated articles. By then, however, the damage was already done.

As a result, the publication's standing on Wikipedia as a reliable source of information took a hit. Wikipedia maintains a page dubbed Reliable Sources/Perennial Sources, where editors track which news sources are considered credible and reliable.

The news about CNET using AI to generate some of its content spread like wildfire. Consequently, it triggered a broad discussion among Wikipedia's editors on the Reliable Sources project page.

Wikipedia editors discussed the publication's new project that leveraged AI to generate articles. One editor noted that the project wasn't going smoothly, as some pieces were rife with grammatical errors and inaccurate information.

AI has encountered its fair share of setbacks and challenges. Remember when an AI-generated article recommended a food bank as a tourist attraction, or when another bizarre piece referred to a deceased NBA player as "useless"?

At the time, the publication was regarded as a reliable source of information, but the project cast it in a bad light. The editors also indicated that they were actively hunting down any AI-generated CNET articles that had made their way into Wikipedia citations so they could remove them.

CNET's ratings have taken a significant blow since then, as highlighted in Wikipedia's Perennial Sources list. Wikipedia breaks the site's credibility into three tiers. The first tier covers the publication before October 2020, when it was ranked as a "generally reliable" source.

CNET's ranking and ratings on Wikipedia

The second tier covers the period following Red Ventures' acquisition of CNET in October 2020. Wikipedia notes that the site began witnessing "a deterioration in editorial standards" after the acquisition, an observation that earned it a "no consensus about reliability" rating.

Finally, the third tier ranks CNET as "generally unreliable." Wikipedia attributes this to the website's use of AI to generate articles featuring inaccurate information and a heavy load of affiliate links.

Speaking to Futurism, a CNET spokesperson said:

"CNET is the world's largest provider of unbiased tech-focused news and advice. We have been trusted for nearly 30 years because of our rigorous editorial and product review standards. It is important to clarify that CNET is not actively using AI to create new content. While we have no specific plans to restart, any future initiatives would follow our public AI policy."

What's the cost of publishing AI-generated articles on the internet?

A robot reading through content for AI-generated text

While Microsoft's Work Trend Index report indicated that only 49% of survey participants were concerned about AI taking over their jobs, AI has arguably already begun rendering some professions redundant, including architectural, design, and coding work. NVIDIA CEO Jensen Huang recently suggested that coding might not be a viable career option for the next generation, adding that it may be best to leave the sector to AI and explore lucrative opportunities elsewhere.

Publications weren't left out of the fray either. With the broad availability of AI-powered chatbots like Microsoft Copilot and ChatGPT, many websites were quick to lay off some of their employees. Microsoft, for its part, recently announced a new initiative to prepare journalists for AI-powered newsrooms, equipping them with skills and best practices for leveraging the tools in their workflows.

AI is remarkably fast at writing articles and searching for information across the web. But if the instances highlighted above are anything to go by, AI-generated articles often suffer from critical accuracy issues. Just this week, a journalist ran into a similar problem while leveraging Microsoft Copilot to write a news article about the recent death of Russian opposition leader Alexei Navalny: the chatbot attributed fabricated press statements to President Biden and President Putin.