The invisible heist: How AI is stealing from newsrooms
As AI reshapes how people access and consume information, newsrooms face an existential crisis. AI systems are mining journalism's value without returning credit or compensation, threatening the very foundations of public-interest reporting.

Predictions about the end of journalism have emerged time and again. From the rise of the internet to the explosion of social media, each disruption seemed poised to bring the industry to its knees. Yet journalism survived. It adapted.
What it faces now, however, is a different kind of threat — not another medium, not another platform, but a system that consumes journalism's value without acknowledgement or return.
This is not some distant future. It is already happening. And the consequences are reshaping how people receive and interpret information.
The architecture of disappearance
When someone searches for information about a recent event, they increasingly receive summaries directly from tools like Google's AI Overview, without needing to visit the original reporting.
Data from Ahrefs, based on 300,000 keywords, shows that the presence of Google's AI Overview reduces click-through rates to news websites by over a third. Even major outlets such as The New York Times have seen their share of organic search traffic fall from 44% to just over 36% in only three years.
This is not merely a shift in traffic patterns. Search engines, which once served as conduits guiding readers towards publishers, are becoming destinations themselves. They summarise content, keep users within their own platforms, and reduce the incentive to visit the source. The $80 billion SEO market — built on the value of ranking high in search results — is now facing an existential threat.
As the venture capital firm a16z recently noted, we are witnessing the rise of Generative Engine Optimisation (GEO), where visibility means being included directly in the AI-generated answer, rather than ranking highly on a search results page. This fundamentally alters the logic of content strategy and the function of traditional search engines.
While search engines once monetised their role by placing ads alongside results, AI models rely on subscription-based systems that have no incentive to drive traffic back to original publishers. If your content does not directly enhance the model's output, it may simply vanish from the information ecosystem.
Josh Miller, creator of the Arc browser and now working on Dia browser, argues that traditional browsers themselves are becoming obsolete. He suggests chat interfaces are already functioning like browsers: they search, read, generate, and respond.
Miller's goal, then, is not to replace web pages, but to create a hybrid model in which the browser and AI chat interface are seamlessly integrated. Instead of clicking through multiple sources, a user might simply ask Dia to "find me the latest news about renewable energy in Bangladesh and summarise the key policy changes".
This suggests that the entire pipeline of content consumption is being rewritten from the ground up.
The economics of invisibility
This transformation strikes at the heart of how journalism has sustained itself in the digital age. The model was simple: produce high-quality content, draw readers to your site, and monetise that attention through advertising or subscriptions. AI-powered answer engines short-circuit this process.
As researchers at a16z highlight, a new kind of brand strategy is emerging — one that considers not only public perception but how content is perceived by AI systems. If your reporting does not improve the model's output, you may disappear from public view entirely.
News Corp recently announced a $250 million licensing deal with OpenAI over five years, giving the AI company access to content from The Wall Street Journal and other properties. While this may appear to be a solution, such partnerships are rare and disproportionately favour large, well-established publishers.
For most media outlets — especially smaller, local organisations — the outlook is far more worrying. Their content is often scraped, processed and repackaged by AI systems without compensation, while the platforms that once delivered their audiences are now being replaced by AI interfaces that retain users within their own ecosystems.
The irony is clear: AI systems become smarter and more valuable by ingesting journalistic content, but their success threatens to dismantle the very financial foundations that enable journalism in the first place.
Responses and impacts
Some publishers have begun to fight back. A few have signed licensing agreements; others are blocking AI crawlers or exploring legal recourse. Some are experimenting with AI within their own platforms. AFP, for instance, has partnered with Mistral AI to give its conversational assistant, Le Chat, access to its news archive, aiming to provide timely and accurate responses rooted in verified reporting.
ProRata, a new startup, is working on a compensation model that tracks which sources contribute to an AI's output and distributes payments accordingly. Other efforts attempt to preserve the connection between the reader and the reporter by embedding visible citations in AI-generated summaries.
Yet these remain isolated and early-stage responses. The dominant AI companies have little incentive to redesign systems that already serve them well.
This shift also has cultural and cognitive consequences.
When information is delivered in pre-synthesised formats, users may lose the habit of contextual or critical reading. While AI-generated responses are fast and convenient, they often flatten complex narratives, omit nuance or introduce subtle errors.
After the recent plane crash in Ahmedabad, for example, Google's AI Overview incorrectly identified the aircraft as an Airbus when it was, in fact, a Boeing 787 Dreamliner. This error appeared at the top of the search results — likely the only version most users would see. Under older models, people might have clicked through to articles that issued corrections. Now, the AI's version becomes 'the' version.
What is next?
The path forward does not lie in rejecting AI altogether. That moment has passed. But there is an urgent need to design systems that acknowledge and support the sources of their information. AI must reinforce, not replace, the relationship between journalism and the public.
Publishers need to act swiftly. They must learn from experimental models and adopt strategies that align with their own contexts. AI can assist in organising and distributing information, but the essential work of reporting, questioning, and verification must remain in human hands.
Audiences, too, must be equipped to understand what makes journalism essential. If information arrives fully summarised, readers may stop checking for context, sources or accuracy. Over time, this weakens the very habits that sustain public knowledge. Previous shifts — from print to digital — at least preserved a direct link between journalists and their readers. AI, however, risks severing that connection entirely.
This is not merely about the survival of newsrooms. It is about protecting a societal function upon which democracy depends. AI can deliver swift responses — but it cannot attend court proceedings, investigate corruption, or report with empathy and context.
The labour behind journalism must be acknowledged, supported and protected. If we allow the link between journalists and the public to erode, the architecture of public knowledge may begin to crumble. The moment to act is now, while we still have the means and the clarity to do so.

Usama Rafid is currently a researcher at the Press Institute of Bangladesh (PIB), where he analyses the intersection of media business and technology.
Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the opinions and views of The Business Standard.