
Underneath virtually every second post on X these days comes the inevitable crowd-sourced, AI-based fact check: “@Grok, is this true?”
Just 30 percent of Americans say they “trust the news,” down from 32 percent last year and well below the 55 percent who said they had a “great deal of confidence” in the news back in 1997.
So if the social masses are turning to AI to help fact-check, will it restore trust in the news?
Sam Altman has his doubts.
“People have a very high degree of trust in ChatGPT, which is interesting because, like, AI hallucinates,” Altman said during a podcast with technologist Andrew Mayne. “It should be the tech that you don’t trust that much.”
He’s right.
Recent BBC research found that 51 percent of AI responses to news-related questions contained significant issues. That finding is the tip of a much deeper iceberg in how AI companies gather and process information: AI systems routinely present false financial data, fabricate market trends and confidently describe events that never happened.
For example, Google’s AI Overviews feature recently hallucinated the Minnesota solar company Wolf River Electric into an active lawsuit with the state’s attorney general, a case it has no part in, damaging the company’s reputation and prompting a lawsuit seeking more than $100 million in damages.
Hallucinating in the research lab is one thing; hallucinations that shape consumer perception and business outcomes are another. We need to think long-term about how to solve AI’s news problem before Americans can no longer distinguish fact from fiction.
An AI “hallucination” occurs when a large language model predicts the statistically most likely next word rather than a factually grounded one, an error that often cascades into a chain of falsehoods presented with the same confidence as verified facts. Sometimes the output even comes with fabricated sources.
The problem extends beyond simple factual errors and “harmless mishaps,” as Google classified the Wolf River incident. AI systems can inject editorial bias into supposedly objective reporting, altering the tone and meaning of market analysis without any transparency or accountability.
In an AI-assisted cover letter or annual report, a hallucination is a simple mistake that can be glossed over. When it’s a factual error about a breaking news story, shared online and viewed by millions, it can quickly become accepted as fact.
The urgency becomes clear when you consider who’s consuming this information. According to the Reuters Institute’s 2025 Digital News Report, 15 percent of people under 25 use AI chatbots for news weekly, with overall usage at 7 percent and growing by the day. This demographic — tomorrow’s executives, investors and decision-makers — is being shaped by information systems that don’t prioritize accuracy.
The tech industry’s response has been insufficient. Companies tout their AI capabilities while burying disclaimers about accuracy in fine print. But solving hallucinations in the current generation of large language models is nearly impossible.
The solution isn’t to abandon AI entirely; it’s to rethink how these systems are deployed, and to move away from business models that prioritize keeping users engaged with their platforms over supplying reliable information.
Engineering models to respond to news questions with verified, human-created content would eliminate hallucinations. That data doesn’t exist, right? Wrong. Among thousands of hours of cable news network reports, local news broadcasts and rolling coverage lies the answer to virtually any news prompt you can muster.
What’s the weather in Dallas today?
Did Trump cheat on the golf course?
Show me the Yankees’ home runs from the past month.
Tech giants won’t stall their own progress to wait for the regulatory frameworks needed to treat AI-generated news with the same scrutiny as traditional financial journalism. However, it is well within the interests of local and national news broadcasters to deploy robust fact-checking mechanisms that address AI’s hallucination problem. And when we marry broadcast journalism’s integrity with AI’s speed, we can start rebuilding trust in news, converting skeptics and even scaling a go-to tech platform (or a few) that consumers can rely on.
Considering how AI chatbots pull answers from a variety of sources outside legacy publications, the best-case scenario is that this information merely gets editorialized. But the more likely reality is that misinformation surfaced by these bots continues to fuel our already volatile media landscape.
While the algorithms will continue to improve, trust in the news is something that can’t be manufactured. We need to design models that respond to news questions with verified, human-created content — it may be our final chance to restore faith in an industry that badly needs it.
Cam Price is co-founder and CEO of LeadStory, a personalized news streaming platform.