Who wrote that headline? Maybe a robot.

2025-11-07

Copyright The Boston Globe


Sabalow and his team turned to an AI tool, Digital Democracy, which tracks every word uttered in California legislative sessions, every donation and every vote taken. It led to an article, and an Emmy-winning segment on CBS, that revealed that Democratic lawmakers had killed a popular fentanyl bill by not voting at all. “I don’t think I could have done that without this database,” Sabalow said.

Artificial intelligence is sweeping through newsrooms, transforming the way journalists around the world gather and disseminate information. Traditional news organizations increasingly use tools from companies like OpenAI and Google to streamline work that used to take hours: sifting through reams of information, tracking down sources and suggesting headlines. In some cases, including at Fortune and Business Insider, publications have explored using AI to write full articles, notifying readers that they intend to use it for drafts.

Almost all of the news organizations have some guardrails in place to prevent errors, such as requiring a human to review anything AI writes before it is published. But embarrassing errors have appeared nonetheless, including at top publications such as Bloomberg, Business Insider and Wired. And many journalists have been left to wonder: Will AI replace journalism jobs in an already fast-shrinking market — or, rather, which jobs?

“AI is an extraordinary tool for journalists,” said Stephen Adler, a former editor-in-chief of Reuters who now runs the Ethics and Journalism Initiative at New York University. “It excels at analyzing large datasets, organizing notes, checking spelling and grammar, even pointing out possible flaws in a story. But, as with much of technology, it comes with significant risks.”

The stakes are incredibly high for the news industry. Over the past several decades, media executives watched as the internet upended their business, laying waste to classified advertising and siphoning away readers to social media.
And many have come to realize that they were flat-footed in the face of that technology transformation, giving away news content in the hopes of clawing back some digital advertising revenue. Executives are eager not to make the same mistake with AI. They are trying to force tech giants to pay for the original content used to train and service large language models, either through commercial agreements or lawsuits — or both. (The New York Times, which has an AI licensing deal with Amazon, has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to AI systems. OpenAI and Microsoft have denied those claims.)

Proponents of AI in newsrooms say that regardless of the business implications, the technology is a powerful new tool to aid with reporting and editing and to engage readers. And they are racing to figure out how to take advantage of that.

Newsquest, a British newspaper chain owned by Gannett, the parent company of USA Today, employs more than 30 journalists who use AI to delve deeper into stories. Axel Springer, the Berlin-based owner of Politico and Business Insider, recently used AI to create an interactive travel planner. Time magazine used an AI-powered chatbot for its 2024 Person of the Year featuring President Donald Trump. The Times has a team that experiments with AI and builds reporting tools.

But there has also been pushback within news organizations. This year, a staff engineer for The Washington Post raised concerns about a new tool under development that would summarize and aggregate news articles from publications across the internet, according to two people familiar with the matter. A lawyer for the Post also weighed in, saying it could be a violation of intellectual property rights, the people said. Vineet Khosla, the Post’s chief technology officer, pushed back against the engineer, who later quit, citing the potential unethical use of AI.
The Post eventually released a different AI aggregator that helps run a scrolling marquee of headlines. A spokesperson for the Post said in a statement that the discussion had taken place in a “proof-of-concept” meeting, noting that those conversations “continuously evolve.”

At Bloomberg, an experiment with AI to generate news article summaries has resulted in dozens of corrections. A note appended to one, an Aug. 1 article about Switzerland’s reaction to Trump’s tariffs, said: “A faulty AI summary was removed for misrepresenting surplus as a deficit.” And the publication removed a summary on a Sept. 22 feature about private equity because it had misattributed a quote. Bloomberg has said that 99% of AI summaries met the outlet’s editorial standards and that journalists had full control over whether a summary appeared. “We’re transparent when stories are updated or corrected, and when AI has been used,” the company said in a statement.

Newsroom unions have channeled many journalists’ concerns about AI replacing them. The NewsGuild, a labor union for journalists, has worked on 48 collective bargaining agreements since late 2023 involving AI in some way, whether around job security or guardrails for its use, said Jon Schleuss, the organization’s president. (The NewsGuild represents some workers at the Times and at Wirecutter, a product recommendation website owned by the Times.) “We actually need something that’s legally enforceable since there are no regulations on it, but you can regulate it through collective bargaining,” Schleuss said.

Soul-searching over the use of AI in the newsroom is far from over, with new debates continuing to break out. Last week, NPR’s leadership proposed testing the use of AI to produce digital versions of radio stories, to make the middle of the editorial process more efficient. The network’s guidelines say that NPR content will always be “the product of human beings.” Connor Donevan, a producer, raised questions about the proposal.
“We are the middle,” he said, according to a person who heard his comment. “The middle involves journalistic choices.”
