
Last week, Google, the world’s most blatant multiple-times-over monopoly, evaded meaningful enforcement in the U.S. antitrust trial over its general search services. There will be no breakup, for now, of the internet’s arthritic corporate backbone, just some minor behavioral remedies and data-sharing requirements. (Here’s an analysis if you want the weeds.) Still pending are the remedies for Google’s adtech monopoly.
I was struck by how the non-outcome in Google’s search antitrust case was justified with two interlocking yet seemingly contradictory reasons:
Google now faces serious competition in the field of generative AI. “The emergence of GenAI changed the course of this case,” U.S. District Judge Amit Mehta wrote.
Google is too big to fail. Barring it from paying other companies to load its products on everything would hurt the payoffees. “Cutting off payments from Google almost certainly will impose substantial — in some cases, crippling — downstream harms to distribution partners, related markets, and consumers, which counsels against a broad payment ban.”
Here we have the paradoxical conclusion that Google is simultaneously not big enough to justify breaking apart and yet too big to punish. For those of us who publish on the open web, where Google still holds a monopoly on search, well, there’s not much good to say here.
If you were hoping a Google breakup might open up market space for a competitor that prioritizes quality referral traffic for writing (maybe in the same way that Bluesky’s friendliness to hyperlinking carved out openings for some new indie publishers), I don’t see that happening. While Google very much remains a one-of-a-kind entity in search services, and still the dominant source of third-party referral traffic, the days of the ten blue links are long dead and not coming back.
Judge Mehta is correct that, from a user’s perspective, more real competitive action is happening in the generative AI space. But these are very different businesses performing very different functions. Part of the value proposition of generative AI is to make the AI itself the publisher, writing its own content, rather than, like the Google of yore, a platform connecting seekers of information directly to its providers. The real Capitalism 101 crisis here might not be that Google subtracts referral traffic down to zero, but that no other AI competitor wants to replace that traffic at all. Consumer surveys say the top use of AI is companionship. The modal user in 2025 might not want an article but a therapist or a girlfriend. I’m not in that business.
The bad news is that it’s far less profitable to write for a machine than for a human audience, at least according to the newest market rate: Anthropic’s recent settlement with authors works out to roughly $3,000 per book. To make minimum wage from that in California, I’d need to write a dozen books for AI a year.
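If you want the back-of-the-envelope math behind that “dozen books” line (assuming California’s statewide minimum wage of about $16.50 an hour and a 2,080-hour full-time year, figures that are mine and not from the settlement):

```latex
% Rough check, under the assumptions above: a full-time minimum-wage year
% in California, divided by the per-book settlement rate.
\[
  \frac{\$16.50/\text{hr} \times 2{,}080\ \text{hr}}{\$3{,}000/\text{book}}
  \approx \frac{\$34{,}320}{\$3{,}000/\text{book}}
  \approx 11.4\ \text{books per year}
\]
```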
Judge Mehta was pretty harsh in writing that publishers weren’t justified in receiving special treatment in the remedies for Google’s monopoly. He did state the problem well, to quote him at length (citations cleaned up):
Plaintiffs seek to address an increasingly existential problem faced by publishers and digital content creators: diminishing traffic to their websites. GenAI products use online content both to train and fine-tune their LLMs and, through RAG-enabled search, to improve the relevancy and accuracy of their responses to user queries. Because those responses typically consist of comprehensive narrative summaries that synthesize information from multiple sources rather than an assortment of individual links, users are navigating to publishers’ websites less often than through traditional search. Publishers consequently are seeing less traffic on their websites, resulting in reduced monetization and revenue.
With Google specifically, publishers are caught between a rock and a hard place. Because publishers rely heavily on Google to drive traffic to their sites, they have little choice but to allow Google to crawl their content for inclusion in Google’s search index. Publishers, however, might want to deny Google permission to use its content to train and appear in its GenAI offerings, like AI Overviews, unless compensated. But Google does not offer such full optionality…
So, say, a publisher did not want Google to display its content in AI Overviews. It could accomplish that under Google Extended only by opting out of being crawled altogether. But that is not a tenable choice, as it may mean the publisher’s absence from Google’s search index and its non-appearance on the SERP, which is critical to directing user traffic to their site.
The court declines to adopt either of Plaintiffs’ proposed publisher-related remedies. … The court heard evidence about Google’s opt-out offerings, but no testimony from a single publisher. The court does not doubt that publishers face new challenges because of GenAI technologies, but there can be no cure without evidence to support it. In any event, the conduct and proposed remedy fall well outside the scope of these proceedings. It was Google’s contractual arrangements with GSE distributors, not website publishers, that gave rise to liability under the Sherman Act.
Well, shit.
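To make the rock-and-the-hard-place mechanics concrete, here is a rough sketch of the robots.txt choices the court is describing, as I understand Google’s current crawler controls (the token names are Google’s; the trade-offs in the comments are my reading of the ruling, not Google’s documentation):

```
# Option 1: block only Google-Extended. Per the ruling, this opts content out of
# Gemini training but does NOT keep it out of AI Overviews, which ride on search.
User-agent: Google-Extended
Disallow: /

# Option 2: block Googlebot itself. This is the only way the court describes to
# stay out of AI Overviews entirely -- and it also drops the site from Google's
# search index and the SERP, i.e. from the traffic publishers depend on.
User-agent: Googlebot
Disallow: /

# Option 3 (the default): block nothing, and accept that being crawled for
# search also means appearing in, and training, Google's GenAI products.
```

That is the “full optionality” Google does not offer: there is no line you can add that says index me, send me readers, but don’t summarize me.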
Here’s what I see growing in the marketplace going forward:
1. A handful of premium international titles with wealthy subscriber bases.
2. Charismatic platform hucksters and/or Joe Rogan patronage-network recipients.
3. A bevy of small producers who carve out hyperspecific niches for superfans and whose output serves as training data for the AI oligopoly that serves the real masses, in exchange for, I dunno, $3,000 for a year’s work if you’re lucky?
The world we know as “publishing” may well conclude that the age of mechanical reproduction was a lot of fun, but these days it’s easier to be an events business and a community lifestyle brand.
That leaves us with AI companies perhaps hoping they can get higher-quality outputs from ever-lower-quality inputs. But with every AI company arguing that unrepentant scraping is total and absolute fair use, the legal leverage to secure meaningful incentives for human production seems uncertain. The Anthropic settlement came only after the company had illegally downloaded and stored millions of copyrighted books.
There are, in my mind, two plausible paths for fair compensation to incentivize human beings to put in the necessary training work to make AI materially viable: collective bargaining for creators at the sectoral level, or taxation of the tech takers and subsidization of the human makers, or some combination of both. Either requires pretty sophisticated legislative action. The courts, although fully capable of identifying and spelling out the problem, may not be of a mind to help create a solution.
I see a much simpler solution. Nationalize Search and put librarians in charge of it. Information is as much a human need as health care, and it's absurd to let the oligarchs take profit from it. And librarians index information for a living. They're the professionals at it. Make the coders subordinate to them.
That judge: "Plaintiffs seek to address an increasingly existential problem faced by publishers and digital content creators: diminishing traffic to their websites." What? My traffic is growing, not diminishing. What's he talking about?