Boiling frogs and corporate power: how we got into the collective trap of social media and how to jump out again

A collective trap is a coordination problem, not a Tragedy of the Commons, and coordination problems are exactly what democratic politics is there to solve. Are we really in a collective trap with actually existing social media? I say "yes" - and then ask how we get out of it.

Tony Curzon Price, 7/7/24

Les influencers d'antan (the influencers of yesteryear): stars of Hollywood's golden era were paid to promote smoking. Tobacco companies shelled out millions to performers for their endorsements. (Thanks to the Jerusalem Post for the image.)

There is a very striking paper by Bursztyn et al, "When Product Markets Become Collective Traps: The Case of Social Media", which has been featured on a Freakonomics podcast and in an FT column by Tim Harford.

The experiment the authors carry out involves asking college students three questions:

  1. How much would you need to be paid to give up TikTok or Instagram for a month?
  2. How much would you need to be paid if some large-ish percentage of your main social group were also giving it up for a month?
  3. How much would you need to be paid if everyone were to give it up for a month?

The answers, on average, were $50, $30 and … wait for it … minus $30.

What does this mean? The interpretation the authors give, which seems to me valid, is the following:

If no one apart from me gives up social media, I would need to be paid $50 per month to stay off it; if quite a few of the people I interact with are off it, then it is about $20 less painful for me to also get off it (this is the most obvious “network effect” piece of the value) … but if everyone is off it, then I would be prepared to pay to be off it. What's more, the amount I'd be prepared to pay to have a world without social media is of the same order of magnitude as the value of social media to me when everyone else is using it too.

Hence the “collective trap” of the title. On average, we'd prefer for all of us together to be off it … but each of us individually prefers to be on it given everyone else is…

This is a powerful result (I am taking as basically OK the generalisation in the interpretation) - it speaks to a strong case for collective action to limit social media as we know it. But what sort of collective action problem is it? The question is important to understand both what kinds of solutions it requires and also what solving it will feel like.

When I first read it, I thought this was a typical “Tragedy of the Commons” - a situation in which we get into a mess individually through an inability to act collectively. That tragedy is often represented in classic game theory terms as a Prisoner's Dilemma:

This is meant to represent the choice between cooperation and defection in a situation in which we would all be better off if we cooperated, but in which each of us has an individual incentive to act selfishly and defect. The iconic case is managing a common resource, like the CO2 stock in the atmosphere or some fish stocks. The first number in each box is my benefit from the course of action, and the second is yours. If we both cooperate, we each benefit to the tune of 50 (bottom right box); but if you cooperate and I defect, I can benefit to the tune of 60 (top right box) … so clearly I am tempted to defect. You are in exactly the same situation, so we both end up defecting, and each of us ends up at -10. Hence the sense that this is the pure form of a collective action problem. We could all exercise restraint and, say, enjoy plentiful fish stocks … but each of us over-fishes, and we end up with no fish at all. In some sense, the heart of the tragedy of the commons is that collective restraint gives each of us 50, while I can fall to temptation and get 60 … at great cost to you.
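The payoffs described above can be checked with a few lines of Python. The 50, 60 and -10 are from the text; the sucker's payoff (my payoff when I cooperate and you defect) is not given, so the -20 used here is an assumed value, chosen only to preserve the Prisoner's Dilemma ordering:

```python
# Prisoner's Dilemma payoffs from the text: mutual cooperation = 50 each,
# unilateral defection = 60 for the defector, mutual defection = -10 each.
# The sucker's payoff (-20) is an assumption, not from the text.
PD = {
    ("C", "C"): (50, 50),
    ("D", "C"): (60, -20),
    ("C", "D"): (-20, 60),
    ("D", "D"): (-10, -10),
}

def best_response(payoffs, other_action):
    """My best reply given the other player's action (my payoff is listed first)."""
    return max(("C", "D"), key=lambda me: payoffs[(me, other_action)][0])

# Defect is my best reply whatever you do: a dominant strategy, hence the tragedy.
print(best_response(PD, "C"))  # D  (60 > 50: temptation)
print(best_response(PD, "D"))  # D  (-10 > -20)
```

Whatever assumed value below -10 is used for the sucker's payoff, defection stays dominant, which is the point of the tragedy.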

The equivalent to “cooperate” and “defect”, I thought, would be “I use/don't use social media” vs “everyone else uses/does not use social media”. But it turns out that the collective trap does not look like a tragedy of the commons. What we have instead is in some ways much more interesting from a public policy perspective. The payoffs from the setup described by Bursztyn et al are more like this:

Let me explain. If others are using social media, then I face the choice in the first column: to use or not to use. I am comparing the $50 payment I would need to stop using social media unilaterally with the $30 that Bursztyn et al report I would need if others in my social network were not using it. They interpret the $20 difference as the “network effect value” - the additional benefit I get from, say, Instagram, from the fact that I know the people I can follow on Instagram. So if my social group is using, the network effect pushes me to use. If no others are using, on the other hand, I am in column 2, comparing the $30 benefit of using the social media (imagine it as a pure third-party content delivery network in that case) to $80. That $80 is the $50 value of using social media along with everyone else, plus the $30 that the surveyed students were on average willing to pay to have everyone off it: the total benefit of coordinating on non-use.

The key problematic dynamic in this game is not temptation but the network effect - Fear of Missing Out (FOMO). This is what sends the person who might unilaterally be doing the right thing (don't use) into the arms of the wrong camp (do use), and therefore why the collective optimum might be missed. We end up in the inferior place (top left, where everyone is using) because I compare the 30 to the 50: under the 30 payoff, I am missing out on the network. That is very different from temptation being the key driver of social suboptimality: in the tragedy of the commons I was comparing the 50 to the 60 - I was tempted to shift opportunistically to something even better, exploiting the common resource while everyone else held back. Here, there is no temptation. If we are in the good place (bottom right), I compare 80 to 30 and am not tempted to leave. And if I am in the bad place (top left), I compare 50 to 30, and am not tempted to move away from the bad place either.
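The same best-response check can be run on the collective-trap payoffs, using the numbers from the text (my payoff only: 50 for using when everyone uses, 30 for quitting alone or using alone, 80 when everyone abstains). A minimal sketch:

```python
# Collective-trap payoffs from the text (my payoff only):
# first element = my choice, second = what everyone else does.
TRAP = {
    ("use", "use"): 50,          # everyone on social media (the bad equilibrium)
    ("abstain", "use"): 30,      # I quit alone: FOMO, I lose the network
    ("use", "abstain"): 30,      # I use alone: pure content, no network value
    ("abstain", "abstain"): 80,  # no one uses (the good equilibrium)
}

def best_response(others):
    """My best reply given what everyone else does."""
    return max(("use", "abstain"), key=lambda me: TRAP[(me, others)])

# Whatever the others do, I want to do the same: no dominant strategy,
# two equilibria, and the (abstain, abstain) one is much better.
print(best_response("use"))      # use      (50 > 30: FOMO keeps me in)
print(best_response("abstain"))  # abstain  (80 > 30: no temptation to defect)
```

Unlike the Prisoner's Dilemma, where defection is a best reply to everything, here my best reply tracks everyone else's behaviour, which is what makes it a coordination game with multiple equilibria rather than a tragedy.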

Tim Harford, in his excellent FT piece on collective traps, chooses to bundle the two types of problem into a single one in his SUV example:

Perhaps, but perhaps more plausibly, they are worried about being hit by a car at all. If you're going to be hit by a Nissan Micra, you'll be safer in a Range Rover than in a Nissan Micra. So if everyone else drives Micras, I am tempted to “defect” to a Range Rover for my own safety, though this is a selfish decision which makes life worse for others. This is a tragedy of the commons, and leads to a ratcheting up in the size of cars that people drive. But this is not the Bursztyn-style collective trap, because in this case there really is a temptation to go bigger when everyone else is smaller. Not so in the pure collective trap.

Tim Harford ends his column with his “inner libertarian” calling him to order.

It seems to me it is worth asking “why not ban social media/Range Rovers, etc”? Does the inner libertarian really have a say at this point? There is a great deal of political philosophy - and politics in practice - that suggests that the FOMO dynamic makes the public policy problem much more tractable than the temptation dynamic. The most sophisticated treatment of Social Contract theory that I know is Ken Binmore's magisterial 2-volume Game Theory and the Social Contract, which argues that social norms and political arrangements exist to assist equilibrium selection in games with multiple equilibria. Now, the Tragedy of the Commons game (as depicted above) has a single dominant strategy - to defect - which is what makes it a tragedy. But the collective trap is a problem of multiple equilibria: whatever happens, you want to do what everyone else is doing … but you'd much rather no one used social media. A crucial point that Binmore makes over and over is that in games of coordination, the first thing you want is a rule. The worst outcome is no rule at all.

Games of coordination are domains where the inner libertarian should recognise that negative, individual freedom has no real value at stake, and that what matters instead is the group's ability positively to do more and better things - call it positive freedom, or Arendt's sphere of action. Crucially, once the libertarian recognises this, there is no conflict between negative and positive freedom, because negative freedom has no value in that context. No libertarian seriously demands the freedom to drive on whichever side of the road they individually choose. So games of coordination are the ones most easily dealt with by norms or by law.

All very well in theory, you might say, but a social media ban - or other rules that make it less of a collective negative, like regulation of FOMO-inducing tricks such as infinite scroll, photo filters, and so on - would certainly feel like a limitation of the freedom of those whose use of it is unproblematic. And these sorts of policies are certainly being resisted by social media companies, for example in the lobbying which diluted the Online Safety Bill (see Beeban Kidron's wonderful LSE lecture, Tech Tantrums: When tech meets humanity).

Part of the disconnect between the theoretical account and actual policy is that Bursztyn et al interpret the average valuations of the different configurations of use and non-use as being the same as the valuation of “the average citizen” (or university student, in their case). But people are irreducibly plural: some would pay a lot to be rid of social media (for society as a whole), some would pay a lot - and be perfectly genuine and informed - to have it widely available. We simply have different conceptions of the good life … and unlike purely private goods, our preferences are a function of the kind of society we want to be. So, while the “which side of the road shall I drive on” game does not arouse passions, the “ban social media, ban Range Rovers” game, despite results like Bursztyn et al's, does.

However, what the result does show is that collective traps create a proper domain for collective choice: some part of the collective whole, one way or another, is going to have to give up their ideal conception of the good life in respect of the collective trap. There is a feel of this in the famous Schrems judgement on the EU's data safe harbour for data transfers to the US. Schrems, an Austrian student at the time he started the case, made the argument - though it was not central to his main case - that it was not possible to have a full and complete social life as a student of his generation without Facebook access, and that therefore the privacy terms offered by Facebook were tantamount to a forced choice. This is an expression of the “bad equilibrium” - “don't ask me to live without a social life, but please can we agree to organise our sociality differently”.

When you see a collective trap as a coordination failure, you might naturally ask how we ever got into it: isn't it obvious, after all, that there should be a standardised side of the road on which to drive? If we can see that it is better both individually and collectively to not use, how do we end up in the bad corner? There is a tendency for this train of thought to lead to general scepticism that we are in fact in the situation as described…

I think there are two answers in the specific case of social media: boiling frogs, and corporate power. The water was warm when the frog first jumped in. It felt like an improvement. It is only slowly that it got uncomfortably hot. So it was with social media - at first, what a joy it seemed: keeping in touch with friends, walking around with your very own imagined community, seeing that community grow, etc. But then FOMO, social comparison and time-wasting took over, partly out of product design and partly because everyone being on social media changed the society that it reflected. Bursztyn et al claim it as one of the benefits of their Collective Trap hypothesis that it does not need any addiction narrative to conclude that social media is bad. That is true, but I think that the addiction hypothesis is very helpful in explaining the actual dynamics of how we got into the bad equilibrium. In fact, there is something of the collective trap in that older addictive scourge, smoking: once addicted, you hung out with the smokers … and if you were trying to quit, you used to wish that all your friends weren't smokers too.

But perhaps there is a bigger point of similarity with tobacco: corporate power. The story of Big Tobacco's relationship with the truth is now well established. Medical and scientific researchers had established the causal link between smoking and lung cancer pretty conclusively by the early 1950s, though evidence had been building since the 1930s. Tobacco executives from the Big 6 met in secret in New York in 1953 to formulate their counter-attack. They decided that a simple denial would not be enough to allay public fears. Instead, they would create the illusion of scientific controversy by promoting the work of sceptical scientists. (This is a familiar tactic, of course, one picked up by Big Oil, by anti-vaxxers and by Putin.) The first concrete result of this decades-long campaign was a publication aimed at the smokers of the world, “A Frank Statement to Cigarette Smokers,” which contained the heart-warming comfort that: "We accept an interest in people's health as a basic responsibility, paramount to every other consideration in our business [...] We always have and always will cooperate with those whose task it is to safeguard the public health [...] We believe the products we make are not injurious to health."

The parallels between the tactics of Big Tobacco and Facebook were drawn out in Frances Haugen's testimony to Congress. She is the ex-Facebook product manager who blew the whistle on the News Feed, and she says that “the documents I have provided to Congress prove that Facebook has repeatedly misled the public about what its own research reveals about the safety of children, the efficacy of its artificial intelligence systems and its role in spreading divisive and extreme messages.” She asks for transparency, for the ability of researchers to go under the hood and check what is happening. The UK's Online Safety Bill gives Ofcom some powers to do this … but they are severely circumscribed to the harms already named in the act. The bill is a very welcome start, but remember the scale of what had to be done to reduce smoking.

So what is to be done?

I will take it as given that both the addiction results and the Bursztyn et al collective trap are features of actually existing social media. What is the way forward? Here is what I think we should do in the UK:
  1. Strengthen the Online Safety Bill by a) requiring that product-safety researchers have unrestricted access to Facebook data and the ability to conduct (appropriately approved) experiments on its platforms, and b) requiring that new social media products - and expect GenAI to produce much that is new and untested here - be subject to pre-release approval, as with ordinary product regulation
  2. Enshrine in law each citizen's right to control of their own attention, and allow parents and those in loco parentis control of minors' attention rights - give this right effect by allowing any citizen the right and the means to reshape the material offered them by the social networks, or to delegate to some legitimate institution of their choosing that right
  3. Give every citizen real-time access to all the data that the social media platforms hold and use on them, so that the platforms cannot exercise asymmetric data power over users - power that could easily be abused to keep the feed more addictive than healthier alternatives.

The idea behind these policy changes is that once we know how the platforms are keeping us in the bad equilibrium, and once we have the tools to shift to a better one without superhuman collective effort, then we may be able to find our way to the healthy social media that Haugen points towards.

However, there is no guarantee that these reforms on their own will solve the Bursztyn problem. As with tobacco, we may need a stronger push. One idea I would like to see explored is to have public broadcasters create the good social media that seems tantalisingly within reach. If the BBC is to be true to its mission to “Inform, Educate, Entertain”, then surely it should go to where so much of this is happening today and show us all how it should really be done.