Gannett, which has over 200 outlets, including USA Today, pauses AI-written articles, reports Axios.
The largest newspaper publisher in the US has stopped using artificial intelligence to write articles after its AI-generated stories were mocked on social media for lacking important context, information, and flair.
In one AI-produced sports article, the team names failed to generate entirely, with the opening line reading, “The Worthington Christian [[WINNING_TEAM_MASCOT]] defeated the Westerville North [[LOSING_TEAM_MASCOT]].”
Since the backlash, the stories have been updated to disclose that they were written by AI.
In July, the publisher announced plans to publish work written by generative artificial intelligence. Using Lede AI, Gannett experimented with automated writing, mostly of sports stories. Its plans included pairing humans with the technology for oversight.
“The desire to go fast was a mistake for some of the other news services,” said Renn Turiano, vice president of Gannett, in an interview with Reuters. “We’re not making that mistake.”
A former reporter for Gannett’s The Indianapolis Star believes that the AI rollout floundered.
“Like most industries, journalism is trying to figure out how to fit AI into its workflow,” Alisson Carter told Doha News. “Unlike some other industries, they’re often doing it in a very public way where failure is immediately evident.”
Gannett is far from the only publication that has walked back plans to use AI. The tech publication CNET scrapped its AI writing tools earlier this year after errors were found in at least half of its AI-written stories. Gizmodo had a couple of hiccups as well.
“Missteps like this can damage trust, far outweighing the incremental benefit of a small high school sports story,” Carter adds. “Some may point to it to support their belief that news can’t be trusted; even supporters of local news may be disillusioned and wonder why they’d spend subscription dollars to support a robot.”
Why AI?
While AI offers news organizations fast and easy-to-produce content, it arrives at a time when trust and credibility in media have declined.
Readers often question whether what they are consuming is fact or opinion, and whether it is worth their time and energy. The apparent lack of quality in these AI-generated pieces suggests that the technology – at least at this stage – is not ready.
Publishers and journalists have also raised concerns that tech companies hold too much power, training their models on news content and essentially attempting to augment or replace journalistic writing.
While AI companies have pushed back against AI fear-mongering, major websites like Amazon and the New York Times are increasingly blocking OpenAI’s web crawler, GPTBot.
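Blocking works through the standard robots.txt exclusion mechanism: OpenAI publishes GPTBot’s user-agent string, and site owners opt out by disallowing it. A minimal example of such a rule:

```text
# robots.txt – tells OpenAI's crawler not to index any page on the site
User-agent: GPTBot
Disallow: /
```

Crawlers that honour the robots exclusion standard read this file at the site root before fetching pages; compliance is voluntary, which is why some publishers have pushed for stronger legal or licensing protections.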
Where do publishers stand?
Most publishers have recently issued guidelines to explain to what extent AI can be leveraged in the newsroom.
“Any output from a generative AI tool should be treated as unvetted source material,” the AP’s guidelines say.
The AP maintains it will not publish AI-generated written or multimedia content, though it will use generative AI images when they are clearly labelled for illustrative purposes.
However, the AP licenses its content to OpenAI for training its models.
The Guardian says AI should be used “with clear evidence of a specific benefit, human oversight, and the explicit permission of a senior editor,” labelling generative AI as “exciting but unreliable.”
On the other hand, Reuters says it is open to using AI to publish content, while the BBC is one of the front-runners in using hyperlocalised AI-generated news.
The New York Times, however, has prohibited the use of its content both for AI-generated publishing and for training models.