Who really pays to produce information?
Journalists as honeybees in a collapsing information ecosystem.

When I was a staff writer at the Los Angeles Times, many of my colleagues and I unionized the newsroom nearly a decade ago because we were deeply unhappy with our management and our compensation and wanted to improve our lot in life. We addressed many of our problems, but in the end, not the very worst ones. A few years later, after I became president of our NewsGuild-CWA local union, Media Guild of the West, which soon came to represent journalists at more than a dozen other newsrooms, I learned how similarly unhappy many of my fellow journalists were across many different types of newsrooms. What we thought were problems of a workplace were really problems of a workforce across an entire industry, not even just in the U.S., but globally.
Always the reporter, I looked for the sources of the discontent. I learned there was a lot of mythologizing about why bad things happened in the news industry, much of which seemed so specific as to be anecdotal, or so sweeping as to be unsupportable by empirical evidence. Accordingly, these myths, like all myths, had poor powers of prediction for when or why bad things would happen again. All of this set me on a journey to look for more powerful explanations for what was happening to all of us, a course that led me in the following years to economics.
To state the problem simply, the incentives to produce high-quality information for public consumption are awful. And they're getting worse with the explosion of AI. This is a bad deal: research links declines in quality news to less attentive voters, more wasteful government budgets, and more loneliness. Anecdotally, it also seems to lead to more dictators.
Human attention is limited, and the market for that scarce attention is the most competitive market we have. In that market, news has been falling into greater and greater disadvantage relative to other types of content: Consumers struggle to differentiate between true and untruthful information, while lies keep getting cheaper to produce than the truth; tech platforms are literally addictive and are disrupting the consumer visits that news producers need to generate revenue (whether via advertising, subscriptions or to justify philanthropic giving); and fair-use legal regimes make it difficult for information producers to get fair payment from increasingly popular AI models, let alone payment in referral traffic.
A new NBER paper from economists Joseph E. Stiglitz and Maxim Ventura-Bolet, “The Impact of AI and Digital Platforms on the Information Ecosystem,” presented at the Saving Journalism conference this month in Paris, likened the introduction of AI to the information ecosystem as a “drone war” with consumers likelier to be casualties than beneficiaries:
AI has an ambiguous effect on the proportion of informed/uninformed consumers (ω). On the one hand, AI can be used by producers to make untruthful information harder to detect. On the other hand, AI can also be used by consumers to improve detection of untruthful information. We refer to these conflicting effects of AI on ω as the “drone war effect”. Which force predominates? Although this is an open question, there are reasons to believe that the first force dominates. It is undeniable that AI is helping producers create untruthful information (creation of malicious bots, propagation of “fake news”, targeted and personalized news feed algorithms...). However, the extent to which AI is currently helping consumers detect untruthful information is unclear. AI-intermediaries can hallucinate, and more importantly, when users search for information they are not constantly verifying everything they consume with AI. So, we presume that the effect of AI on ω is likely negative.
But if and when AI tools improve — in the positive scenario for AI accuracy — Stiglitz and Ventura-Bolet warned that some of their equilibrium modeling showed the conditions for an “information collapse” scenario:
At the current state of AI, hallucinations and inaccuracies serve as a natural brake on user substitution away from primary sources. Because users cannot fully rely on AI outputs, they continue to engage with information producers, preserving the incentives for information production. However, as AI intermediaries become increasingly accurate and contextually fluent, with the ability to automatically check and verify the original sources to which human actors refer, the need to consult original sources diminishes. In the limit, if AI becomes perfectly reliable, or reliable enough, and all information consumption is mediated by such systems, incentives to produce new information collapse. No one invests in producing accurate information when their work is instantly absorbed and intermediated by an AI that captures all downstream attention and value.
To analogize the AI information-collapse scenario: It’s as if the modern information economy were a bee farm, where journalists are the honeybees. (We do sting people from time to time.) Information is honey, and society loves honey! You don’t even need to meet a bee to eat honey. But making honey is so slow… Then a new AI-powered robot arrives that makes it a lot cheaper and faster to get the honey to consumers — by driving over the flowers, smashing open the beehive and killing all the bees in the process. There’s no more honey to collect next time, and the robot is left to scrape up whatever is left in the debris. One day you get to the grocery store, and crud is the only thing on the shelves. Suddenly dinner is less appetizing but you don’t know why. Oh, and society has gotten addicted to riding around on the bee-smashing machine.
Here’s a real-life, concrete example of the AI free-rider problem for news producers, demonstrated by Aengus Bridgman and Taylor Owen of the Centre for Media, Technology and Democracy in a new study:
When asked about Canadian news events drawn from their training data, ChatGPT, Gemini, Claude, and Grok provide no source attribution 82% of the time. When given web access and asked about specific recent articles, the same models covered enough of the original reporting to substitute for the source in 54 to 81% of cases. Models linked to Canadian news sites in 29 to 69% of responses, but named the originating outlet in the response text in only 1 to 16% of cases. When we named the outlet and asked the same models for citations, attribution rates reached 74–97%.
Not only that, Bridgman and Owen found the AI models had plundered paywalled news articles that humans would have to pay to read, but which anyone could read for free via AI summaries:
AI models covered paywalled content (64%) at rates comparable to free content (70%). For most stories, models find freely available versions elsewhere. In some cases, API logs showed models citing paywalled URLs directly with extensive verbatim reproduction, suggesting that paywalls may not block automated retrieval the way they block human readers. Either way, the result is the same: paywalled journalism is reproduced without compensation.
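The finding that paywalls may not block automated retrieval is easy to see with "soft" (client-side) paywalls, a common implementation in which the full article text ships inside the HTML and JavaScript merely hides it from human readers. A minimal sketch below, using a hypothetical page and Python's standard-library HTML parser, shows how a client that never runs the page's scripts recovers the whole story; the page markup and overlay name are illustrative assumptions, not any specific outlet's code.

```python
# Sketch: why client-side ("soft") paywalls don't stop automated retrieval.
# The full article is present in the HTML; only a script-driven overlay hides
# it from human readers. A non-browser client never runs the script.
# The page below is a hypothetical example, not a real outlet's markup.
from html.parser import HTMLParser

SOFT_PAYWALLED_PAGE = """
<html><body>
  <div id="paywall-overlay" style="display:none">Subscribe to keep reading</div>
  <article>
    <p>Paragraph one of the paywalled story.</p>
    <p>Paragraph two, normally hidden behind the overlay.</p>
  </article>
  <script>showOverlayAfterOneParagraph();</script>
</body></html>
"""

class ArticleExtractor(HTMLParser):
    """Collects text found inside <article>, skipping the overlay and scripts."""
    def __init__(self):
        super().__init__()
        self.in_article = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "article":
            self.in_article = True

    def handle_endtag(self, tag):
        if tag == "article":
            self.in_article = False

    def handle_data(self, data):
        if self.in_article and data.strip():
            self.chunks.append(data.strip())

extractor = ArticleExtractor()
extractor.feed(SOFT_PAYWALLED_PAGE)
full_text = " ".join(extractor.chunks)
print(full_text)  # Both paragraphs come back, overlay or not.
```

"Hard" (server-side) paywalls that withhold the text until login do block this kind of fetch, which is consistent with the study's other observation that models often find freely available versions of the same story elsewhere.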
There are many existing and proposed subsidies that would offset the accelerating destruction of production incentives for quality news providers. In addition to longtime support for public broadcasters, some governments around the world and in the U.S. have implemented or are considering policies such as journalist employment tax credits or grants that lower the significant labor costs of quality news production. (I’m working on one of those proposals in California as we speak! Thank you, Assemblymember Chris Ward, for introducing AB 2222.) Even in a more stabilized information ecosystem, public support for news providers has been a key feature of modern democracies around the world for decades.
But research like that from Stiglitz, Ventura-Bolet, Bridgman and Owen shows that direct subsidies of information producers are also indirect subsidies of the information free-riders in Big Tech whose business models generate the urgency for additional subsidies in the first place. Everybody likes the idea of cheaper honey. It’ll be easier if we can also make the beehive-smashing robots a little less smashy.

