How AI is taking over local news
November 6, 2025



The most obvious use case for generative AI in editorial operations is writing copy. When ChatGPT lit the fuse on the current AI boom, it was its ability to crank out hundreds of comprehensible words almost instantly, on virtually any topic, that captured our imaginations. Hundreds of “ChatGPT wrote this article” think pieces resulted, and college essays haven’t been the same since. Neither has the media.

In October, a report from AI analytics firm Graphite revealed that AI is now producing more articles than humans. And it’s not all content farms cranking out AI slop: a recent study from the University of Maryland examined over 1,500 newspapers in the U.S. and found that AI-generated copy constitutes about 9% of their output, on average. Even major publications like The New York Times and The Wall Street Journal appear to be publishing small amounts of copy that originated from a machine. I’ll come back to that, but the big takeaway from the study is that local newspapers, often considered the crucial foundation of a free press and still the most trusted arm of the media, are the largest producers of AI writing. Boone Newsmedia, which operates newspapers and other publications in 91 communities in the Southeast, is a heavy user of synthetic content, with 20.9% of its articles detected as being partially or entirely written with AI.

Why local papers rely on AI

Putting aside any default revulsion at AI content, this actually makes a lot of sense. Local news has been stripped to the bone in recent years as reader attention has fragmented and advertising dollars have shrunk. Many local papers have folded (more than 3,500 since 2005, according to the Medill School of Journalism at Northwestern University), and those that remain have adopted other means to survive. In smaller markets, like my New Jersey town, it’s not uncommon for the community paper to republish press releases from local businesses.

The fact is, writers cost money, and writing takes time. AI radically alters that reality: for a $20-a-month ChatGPT subscription, you now have a lightning-fast robot writer ready to tackle any subject. Many unscrupulous people treat this ability as their own room full of monkeys with typewriters, cranking out articles just to attract eyeballs, which is the definition of AI slop. But there’s a difference between slop and AI-generated copy written to inform, with the proper context, and edited by a journalist with the proper expertise.

In a local news context, the most frequently cited use case for AI writing is the lengthy school board meeting that would otherwise take a reporter several hours of reviewing transcripts, synthesizing, and contextualizing just to cover what happened. With AI, those hours compress to minutes, freeing up the reporter to write more unique and valuable stories. More likely, of course, is that the reporter no longer exists, and an editor, or even a sole proprietor, simply publishes as many pieces as they can that serve the community. And while that’s not ideal, I don’t see what’s wrong with it from a utilitarian perspective. If the copy informs, a human has done a quality check, and the audience is engaging with it, what does it matter whether or not it came from a machine?
AI mistakes hit different

That said, when mistakes happen with AI content, they can undermine a publication’s integrity like nothing else. This past summer, when the Chicago Sun-Times published a list of hallucinated book titles as a summer reading list, it caused a national backlash. That’s because AI errors are in a different category: since AI lacks human judgment and experience, it makes mistakes a human never would.

That’s the main reason using AI in copy is a risky business, but safeguards are possible. For starters, you can train editors to catch the mistakes that are unique to AI. Robust fact-checking is obvious, and using grounded tools like Google’s NotebookLM can greatly reduce the chance of hallucinations. Besides factual errors, though, AI writing has many telltale quirks (repeated sentence structures, dashes, “let’s delve . . .,” etc.). I call these “slop indicators,” and while they’re not disastrous, their continued presence in copy is a subtle signal to readers that they should question what they’re reading. Editors should stamp them out.

Which is not to say publications shouldn’t be transparent about the use of AI in their content. They absolutely should. In fact, I’d argue that being as detailed as possible about the AI’s role, at both the article level and in overall strategy, is crucial to maintaining trust with an audience. Most editorial “scandals” over AI articles blew up because the copy was presented as human-written (think of Sports Illustrated‘s fake writers from two years ago). When the publication is upfront about the use of AI, as with ESPN’s write-ups of certain sports games, it’s increasingly a non-event.

Which is why it’s confusing that some major publications seem to be publishing AI copy without disclosing its presence. The study claims that AI copy is showing up in some national outlets, including The New York Times, The Washington Post, and The Wall Street Journal. This looks like a similar issue to the Sun-Times incident, if on a smaller scale: almost all of the instances were in opinion pieces from third parties, where it appears to be happening around 4–5% of the time. That suggests third parties are using AI in their writing process without telling the publication. In all likelihood, they’re not aware of the outlet’s AI policy, and their writing contracts may be ambiguous. The rest of the content wasn’t totally immune from AI writing, either; the study found it present 0.71% of the time.

Getting ahead of AI problems

All of this speaks to the point about transparency: be straight with your audience and your staff about what’s allowed, and you’ll save yourself headaches later. Of course, policies are only effective with enforcement. With AI text becoming more common and more sophisticated, having effective ways of detecting and dealing with it is a key pillar of maintaining integrity.

And dealing with it doesn’t necessarily mean forbidding it. The reality is that AI text is here, growing, and not going away. The often-cited truism about AI, that today is the worst it will ever be, goes double for its writing ability, since writing is at the core of what large language models do. You can bet there will be train wrecks over AI writing in the future, but they won’t be about who’s using AI to write. They’ll be about who’s doing it irresponsibly.

Subscribe to Media CoPilot. Want more about how AI is changing media? Never miss an update from Pete Pachal by signing up for Media CoPilot. To learn more, visit mediacopilot.substack.com.
