Artificial Intelligence (AI) is rapidly changing how information is delivered and how people consume it. AI summaries—such as Google’s “AI Overviews”—are becoming a normalised way for audiences to find information. However, these summaries often misrepresent facts and, critically, divert readers away from original news sources.
For news organisations, this shift has serious consequences. AI systems can access and repackage content—including paywalled material—into short, convenient summaries. From a reader’s perspective, there is often neither the need nor the time to click through to the original story or seek out primary sources. As a result, the value of an individual news article is steadily eroded.
News organisations now face a growing challenge: how to create value in a “zero-click” world.
Trust in news in the age of AI
In response to this challenge, many newsrooms are turning to AI themselves to reduce costs and streamline workflows. Ironically, this has contributed to increasing scepticism among audiences.
Research from the Digital News Report: Australia shows that while many people believe AI-produced news is cheaper and more up to date, they also see it as less trustworthy, less accurate and less transparent than journalism produced entirely by humans.
Audience interviews reveal deeper concerns. Beyond accuracy, people worry about algorithmic bias—the risk that AI systems amplify existing prejudices embedded in data, prompts or design choices.
As one participant noted, while humans try to recognise and manage their biases, AI can heighten those biases depending on how questions are framed and information is processed.
This uncertainty compounds an already crowded and confusing information environment. Many audiences feel overwhelmed by misinformation and lack confidence in their ability to identify quality journalism. In such a setting, people often cannot tell whether a story has been written by a journalist, generated by AI, or shaped by both.
Unless news organisations fundamentally rethink their approach, individual articles risk becoming disposable—just one of hundreds of inputs used by AI systems to generate personalised content for consumers.
The role of the human in journalism
What AI cannot replace is trust. Journalism’s role as a place people turn to when they need reliable, contextual and accountable information still depends on human judgement.
Research shows audiences remain most comfortable with journalism produced by human reporters, even when AI is used as a supporting tool. Acceptance drops sharply when AI takes the lead and human oversight is minimal. These concerns are closely tied to limited understanding of how AI works, particularly when it comes to cultural context and handling sensitive or complex topics.
Media literacy plays a significant role. People who have received some form of news or media literacy education are far more accepting of AI-assisted journalism. They are also more confident navigating today’s information environment and discerning credible sources.
Transparency is therefore essential. Some news organisations publicly explain how AI is used in their newsrooms. Others—particularly small and medium-sized outlets—have not disclosed their practices, and many may lack formal AI policies altogether. This lack of clarity is compounded by research showing audiences often cannot tell whether or how AI has been used in the journalism they consume.
Changing audience preferences
Despite these concerns, generative AI is becoming increasingly popular as a way to access news, especially among younger audiences. Many express interest in personalised news summaries, alerts and recommendations tailored to their interests.
Convenience and relevance are key drivers. Younger audiences are particularly interested in using AI to make news easier to understand. They are far more likely to support tools that simplify language or summarise longer stories.
The future of AI adoption in journalism will depend on how effectively news organisations balance innovation with trust—while investing in transparency, media literacy and the human skills that technology cannot replicate.
Editor’s note
The media industry is under growing pressure from declining advertising revenue, rising production costs and the spread of misinformation—particularly across social media and private messaging platforms. The emergence of AI-generated summaries and “zero-click” news consumption adds a further challenge, reducing traffic to original reporting and weakening the economic foundations of journalism.
In this environment, trust, cultural context and editorial judgement remain central to journalism’s public value. While AI tools can assist with reporting and distribution, credibility still depends on human accountability, local knowledge and transparent editorial decision-making.
As AI becomes more embedded in how news is produced and consumed, media organisations, policymakers and educators will need to prioritise transparency and media literacy to ensure journalism continues to serve the public interest.
Authors
Professor Sora Park
University of Canberra
Dr TJ Thomson
RMIT University
Editors
Andrew Jaspan
Editor-in-Chief, 360info
Namita Kohli
Commissioning Editor, 360info
This article is part of a series on AI, Journalism and Democracy.
Originally published under Creative Commons by 360info™.