How does enshittification come to LLMs, and how do we stop it?

We are in the golden phase of truly creative destruction with LLMs, but don't expect it to just continue forever. We all now know what comes next if we don't take active steps to stop the eternal recurrence. Here's why we should be truly scared, and what we should do about it.

Tony Curzon Price, 19/11/24

The Doctorowian enshittification dynamic has applied to all the major cyberspace developments to date, so the betting has to be that it will come to LLMs too. Unless understanding the dynamic allows us to forestall it. This post is what I believe to be the minimum modest proposal for using the insights of the dynamic to break it. It is not too late.

So how does the enshittification dynamic actually go? These are its general phases:

  1. Great innovation for humanity attracts users who can’t believe such wonders are offered for free (or very cheap, in the old days), with minimum barriers to use
  2. Big user base creates network economies of scale that improve the product, eventually leading to winner-take-almost-all; the product becomes an essential part of modern living for users
  3. Shareholders chomp at the bit for returns, and so a business model is developed that sells the users - in the most recent iterations, this has been via attention-harvesting; this is shittiness 1.0, with the product now starting to work against the interests of the original awe-struck users
  4. The small number of providers now compete for the paying side of the market, giving cash payers great ways to harvest more attention, more efficiently; the cash payers slowly build the product into an essential component of their way of doing business
  5. We now hit shittiness 2.0 - the originally intended users get an ever more degraded service, as per the original dynamic; but now the paying side is also over a barrel: you can’t shift sneakers any more without complete reliance on the Google, Meta & Amazon ad programmes
  6. The providers now have power over both sides of the market, and their shareholders can’t get enough: the price of the attention goes up; the quality of the good that originally captured the attention goes down; both sides are hooked on the platform and can’t really engineer their way out. We’ve now gone full enshittification.

There is no doubt that LLMs are in phase 1 of this process. They are amazing, and there is genuine competition between LLM providers to make better and better models for those who ought to be the ultimate beneficiaries.

So we’re enjoying our moment when the titans of tech are fighting each other for our benefit. But how do we drift onto the ladder to enshittification 1.0, and then 2.0? I think it is not so hard to see.

First, we need to identify the network effect. When that Google memo said “we have no moat”, I am pretty sure that what they really meant was “our existing moat does not help us much here, and the game for the next few years will be building a bigger moat than OpenAI’s”. It was not that “there is no moat in LLMs”. There is a blindingly obvious moat.

Let me explain. Think of LLMs as tools which, to a first approximation, have hoovered up the data commons of humanity, encoded it into a really good database with a natural language UX, and are now accepting queries to that database. Creating a good natural language UX is all about spotting patterns in how we think, which is more or less the same as spotting those patterns in language. The question of the moat, then, is: what are the important additions to humanity’s data once you have hoovered up the commons? Of course, there is quality content, and lots of media properties have done deals to license theirs to the model-makers. But much more valuable in terms of understanding how we think - and therefore in terms of making the database we all want to interact with - is data showing how we actually think and respond in Q&A sessions. That is why sites like Reddit have suddenly become valuable, withdrawn from the data commons, and offered their content exclusively to Google.

But it’s much more than that … The really valuable conversational data comes from the interaction stream with the LLMs themselves. What is happening is that humanity’s knowledge commons has made some machines which generate new, privatised streams of exactly this kind of data, and those streams in turn augment the value of the commons. This is the network effect. If you can get more “thought-stream” data than the next guy, your model will be better than the next guy’s. That is why Google, OpenAI and Amazon (via Anthropic) are battling it out and sinking all that cash. That is why Meta and X, which intrinsically have products that generate thought-stream data, are also in the running.
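
To make the feedback loop concrete, here is a minimal, purely illustrative Python sketch of the provider’s side of the bargain: every query both serves the user and quietly mints new private training data. All the names in it (`Interaction`, `InteractionLog`, `to_training_examples`) are my own inventions for illustration, not anyone’s actual pipeline.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Interaction:
    """One turn of the thought-stream: a user prompt and the model's reply."""
    user_id: str
    prompt: str
    response: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class InteractionLog:
    """Hypothetical provider-side log: every conversation becomes private data."""

    def __init__(self) -> None:
        self._records: list[Interaction] = []

    def record(self, interaction: Interaction) -> None:
        self._records.append(interaction)

    def to_training_examples(self) -> list[dict]:
        """Turn the logged thought-stream into fine-tuning pairs.

        The more users a provider has, the more of these it can mint -
        which is the network effect described above.
        """
        return [{"input": r.prompt, "target": r.response} for r in self._records]


# Each query answered is also a new example banked for the next, better model.
log = InteractionLog()
log.record(Interaction(user_id="u42",
                       prompt="How do moats work?",
                       response="A moat is a durable competitive advantage..."))
print(len(log.to_training_examples()))  # -> 1
```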

Remember, we are in the pre-shit phase. It feels like we are standing in a firehose of miracles. But just keep your nose alert - it is going to turn. One of the MAXOGs will start to pull away, get into the virtuous feedback loop, and become deeply designed into our lives and into the product flows of government and the economy.

Once that happens, start sniffing the firehose: the shareholders will ask for payback. What will that look like? Finally, the logic of all that alignment research will become clear: the point will be to align the output of the winning LLM not to the good of humanity in general, but to the good of the highest bidder for every query. Yes … that will be, yet again, Nike and Coca Cola; it will be snake-oil salespeople peddling ensnaring cures for whatever ill you’ve revealed, through your interactions, lies deep in your soul. And of course, since each interaction will now be sold to the highest bidder, the LLM-maker will have every incentive to use all the tricks of the trade to addict you to the model. That will be enshittification 1.0. Then, once the only path to mind control runs through the MAXOG behemoth, its shareholders will enshittify the platform for the other side of the market. That is the eternal recurrence we seem to be stuck in.

OK. Hope I’ve got you worried. Because it is not too late to sort this one out.

The moat is being built by the thought-stream, just as Google’s search and display advertising moat was built by the click-stream. But the thought-stream is yours and mine. It belongs, at least in part, to the people and institutions who are doing the input work and being rewarded with today’s LLM outputs. And if the whole commons had access, in some form, to those bits of the thought-stream that we consent to return to the commons (perhaps anonymised, pseudonymised, sometimes linked and sometimes not, etc.), then the miracle of creative destruction that we are seeing today in the LLM model space would simply keep going. We’d never give anyone the opportunity to start on the road from here to Shit 1.0 and Shit 2.0.
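
To give a flavour of what “returning consented bits of the thought-stream to the commons” might look like in practice, here is a minimal Python sketch. Everything in it is hypothetical - the record format, the consent flag, the salted-hash pseudonymisation (a real data union would need far stronger privacy machinery) - but it shows the shape of the idea: only what members have opted to share goes back, and it goes back without raw identifiers.

```python
import hashlib
from dataclasses import dataclass


@dataclass
class ThoughtStreamRecord:
    """One member's interaction, plus their sharing decision (all hypothetical)."""
    user_id: str
    prompt: str
    response: str
    consent_to_share: bool  # did the member opt in to returning this record?


def pseudonymise(user_id: str, salt: str = "data-union-salt") -> str:
    """Replace the raw identifier with a salted hash so records can be linked
    without naming the member. Illustration only - not a privacy guarantee."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]


def release_to_commons(records: list[ThoughtStreamRecord]) -> list[dict]:
    """Keep only consented records, pseudonymised, ready to rejoin the commons."""
    return [
        {"member": pseudonymise(r.user_id), "prompt": r.prompt, "response": r.response}
        for r in records
        if r.consent_to_share
    ]


# Two members, only one of whom opts in: only that record is released.
stream = [
    ThoughtStreamRecord("alice", "What is a moat?", "A durable advantage...", True),
    ThoughtStreamRecord("bob", "Something private", "Something personal", False),
]
print(release_to_commons(stream))  # one record, with a hashed member id
```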

Fine - there is no technological necessity to start down the smelly, slippery slope. But how do we stop it - not in principle, but in practice? How do we actually enforce the principle that if you take from the commons, it is your duty to enrich the commons?

Well, here we need to get a few things in place first, but they are not so hard to do:

  1. Join a data union whose values are aligned with yours, and which will aggregate and curate the thought-streams of its members and release them back to consented members of the commons (i.e. you won’t be forced to share your thought-stream with X’s Grok if you don’t want to)
  2. If there is no data union for you to join in your jurisdiction, create one.
  3. If the regulation in your jurisdiction would not give a union the powers it needs to protect humanity’s commons, organise to get the law changed.

OK … As far as I know, “2” and “3” apply everywhere today. So the solution is clear: set up data unions; lobby government. The human commons depends on it.

More on all this soon.