Transparency comes up a lot with respect to the use of AI in journalism. There are obvious reasons for this—journalism is all about bringing transparency to what happens in the world, after all—and AI is a new thing that many people (rightly) view with skepticism. But that desire for transparency brings an opportunity to improve audience trust, something that’s in short supply lately.
In fact, a recent report on the use of AI in news media from the Reuters Institute showed a pretty clear pattern: audiences' trust declines the more AI is used in the journalistic process. Only 12% of people were comfortable with fully AI-generated content, with comfort rising to 21% for mostly AI, 43% for mostly human, and a respectable (but, notably, not amazing) 62% for fully human content.
The data points to a fairly obvious takeaway: if trust is your goal (which in journalism, it certainly is), you should use less AI, not more. But we're actually seeing precisely the opposite trend. Newsrooms worldwide are ramping up AI operations, with most major outlets, including The New York Times, using it somewhere in their process, and some are using it to assist in creating content itself. ESPN, Fortune, and CoinDesk are just three examples of major, respected outlets leveraging AI to help write their articles.
Flipping the skepticism of AI
What’s going on? Sure, there are industry pressures to incorporate AI, but the data suggests that you might sacrifice trust with your audience. That’s a difficult problem, but it can be mitigated by prioritizing transparency.
The data from the Reuters report creates a clear trend line, but it's important to keep in mind that the question was generic, asking about comfort levels regarding "AI- and human-led news," not about a specific use case. That's why it's important to provide a fuller understanding of what the AI is actually doing—say, sorting through hundreds of video transcripts to zero in on specific topics, or writing a first draft of "just the facts" that the reporter then scrutinizes and adds to—rather than just putting "AI-assisted" labels on things. That can mitigate the risk of losing trust somewhat, and this kind of transparency, done right, might even buttress it.
I thought about this when I recently built an AI project around my work. I host a podcast for The Media Copilot where I interview leaders in media, tech, and journalism every week. However, once I publish an episode, it fades quickly. A new one comes along the following week, and although I capture specific insights in short clips and articles, those also don't last long, and then that conversation—which is likely still relevant—is trapped in the past.
So I took every single podcast I've done and put the transcripts into a single notebook in Google NotebookLM. The tool applies AI to the notebook's sources so anyone can extract insights from them. If you have questions about the use of AI in media and journalism, just ask, and you'll be able to hear what people like The Atlantic's Nicholas Thompson, Reuters' Jane Barrett, and the AP's Troy Thibodeaux think about it. And because it's grounded in only the podcast transcripts (and not all the junk on the internet), the chance of the notebook making something up is very low.
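The grounding idea is worth making concrete. A minimal sketch of the pattern—retrieve passages from a fixed transcript corpus, then instruct the model to answer only from those passages—might look like the toy below. The corpus, the keyword-overlap retrieval, and the function names are all simplified stand-ins for illustration, not NotebookLM's actual mechanics.

```python
def retrieve(question: str, transcripts: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank transcripts by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = []
    for title, text in transcripts.items():
        overlap = len(q_words & set(text.lower().split()))
        scored.append((overlap, title))
    scored.sort(reverse=True)
    # Keep only sources that actually share vocabulary with the question.
    return [title for overlap, title in scored[:top_k] if overlap > 0]

def grounded_prompt(question: str, transcripts: dict[str, str]) -> str:
    """Build a prompt restricted to the retrieved transcript passages."""
    sources = retrieve(question, transcripts)
    context = "\n\n".join(f"[{t}]\n{transcripts[t]}" for t in sources)
    return (
        "Answer using ONLY the transcript excerpts below. "
        "If the answer isn't in them, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

# Hypothetical mini-corpus standing in for the podcast archive.
corpus = {
    "Episode 12": "We discussed newsroom AI policy and disclosure labels.",
    "Episode 31": "A conversation about paywalls and subscription churn.",
}
print(retrieve("How should newsrooms label AI disclosure?", corpus))  # → ['Episode 12']
```

In a real system the retrieval would use embeddings rather than keyword overlap, but the trust-relevant part is the prompt: the model is told to refuse rather than reach beyond the supplied sources, which is why answers stay anchored to what guests actually said.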
The craft table of journalism
You can apply this idea to journalism more broadly. If you break down what a journalist does when creating a story, they typically gather things like research, interviews, specific documents, and the history of their reporting on a topic. In the process of writing, they curate the most important parts of that information, then apply their judgment—informed by experience and their target audience—to craft a story.
You might call that last part the reporter's lens. But it's really just one lens among many through which someone could view the material. A person with a different background, priorities, and knowledge of the subject might want to apply a different lens. You can think of this as a variation on the idea of "content remixing," except that idea is usually concerned with format. This is remixing for audience.
A podcast about the latest news in AI, for example, might focus on the most popular headlines for a general audience, the biggest market-moving events for investors, or the most noteworthy technical advancements for developers. Each version might even cover the same stories, just with different details called out and expanded on.
Beyond the audience opportunity, though, there's one of trust. Many news consumers distrust what they see in the media today. If you drill down on many of the complaints, which are often about political bias, the issue is less about the underlying facts and more about the lens the reporter has put them through.
This is where AI tools like NotebookLM can serve as a kind of window into how journalists curate their information. By allowing a glimpse into the raw material—the interviews, the research, the unfiltered facts—readers might better understand how journalists arrive at their conclusions. It could demystify some of the process, making it less about “just trust us” and more about “here’s how we got here.”
Of course, not every story could or should get this kind of treatment. Journalists are often entrusted with confidential material and sources that require anonymity, so an open-door approach to the “raw material” of the story simply wouldn’t be possible. Redaction is an option, but that would likely sow even more doubt in the conspiracy-minded.
Making journalism interactive
But for some stories, AI could help create a new, more transparent kind of journalism—one that’s more interactive. Imagine if readers could use AI to navigate the same corpus of information and draw their own conclusions or even generate their own version of the story. Certainly, few readers will want to dive this deep, but for that curious minority, it could be a fascinating new layer. In a sense, it turns the journalist into a kind of information curator, where the reader gets to apply their own lens.
That feedback loop could have trust benefits for the journalist, too. By deconstructing the process this way, journalists might gain a better understanding of their own lens: where they're applying it, how it affects the story being told, and how other lenses change the picture. That perspective would inform how different audiences interpret their stories, which will hopefully lead to stronger ones.
In the end, we won’t know if this approach is helpful for trust until we try it. It’s an experiment in making journalism not just something you consume, but something you can interact with. And whether it’s an academic exercise or a new genre, it’s at least a step toward understanding how we shape the lenses that shape our news.