Uncommon carriers: should social media be regulated?

Over the past year, conservatives in the US have been fighting an ‘anti-censorship battle’ with social media platforms, as a reaction to posts and accounts being taken down for expressing hardline conservative views that violate community standards. New laws in Florida and Texas have made headlines, and the argument has now reached the Supreme Court.

The legal challenge seems to hinge on a category definition that feels very quaint to this Millennial: “are social media platforms like Twitter and Facebook more like a newspaper or a telephone network?” Currently, social media enforcement of community standards (and their ability to take down posts and de-platform certain ex-Presidents) is analogous to the editorial function of a newspaper: the New York Times is free to express a viewpoint, and isn’t obliged to publish anything it doesn’t agree with. Verizon, on the other hand, is regulated as a ‘common carrier’ and is required to transmit a telephone call regardless of its political leaning.

Republicans’ argument that large social media companies are common carriers will probably be rejected by the Supreme Court. But it’s not totally outlandish:

  • Like much ‘common carrier’ infrastructure, social media experiences network effects: the value of a social media platform increases with the number of connected users. This can lead to monopolistic ‘winner-takes-all’ situations, which it might be appropriate to regulate.

  • Unlike a newspaper, most of the content on social media is posted for free, by the audience itself.

At first glance, then, Twitter is simply a digital abstraction of a town square: what could be more neutral and noble than holding space for free exchange of ideas?

The fact is, of course, that while Twitter and Facebook have elements of both a pure network platform and a pure publisher, they are categorically different from both. The media industry spent a painful 15 years adapting to this reality, but lawmakers and regulators, it seems, are still in 2008.

One thing we are still only beginning to grapple with is the fact that social media platforms, as mediators of our relationships - and thus our very identities - can have real-world impacts at an individual and societal level. For me this comes into sharpest focus when you consider how the platforms are monetised, and how the technology combines with that revenue model to produce new emergent behaviours. Here is a very high-level summary:

  • Social platforms use an algorithm to prioritise content for users

The sheer amount of stuff posted on these platforms (even by just your friends) is generally more than you can ever read, and displaying posts in simple chronological order makes for a poor user experience. To make the platform something you actually want to interact with, the posts you see, and the order you see them in, are managed by an automated process. This algorithm learns over time by tracking your behaviour on the platform, and gets better at putting things in front of you that produce whatever result it’s been told to achieve.
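To make this concrete, here is a toy sketch of engagement-based ranking. Every name, signal and weight is invented for illustration - real platforms use far more sophisticated machine-learned models - but the shape is the same: score each post on engagement signals, recency and a learned per-author preference, then sort.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    age_hours: float
    likes: int
    shares: int

def engagement_score(post: Post, affinity: dict[str, float]) -> float:
    """Toy ranking score: weighted engagement signals, decayed by age,
    boosted by how much this user has interacted with the author."""
    signals = post.likes + 3 * post.shares       # shares weighted above likes
    decay = 1 / (1 + post.age_hours)             # newer posts rank higher
    boost = 1 + affinity.get(post.author, 0.0)   # learned per-author preference
    return signals * decay * boost

def rank_feed(posts: list[Post], affinity: dict[str, float]) -> list[Post]:
    # Highest-scoring posts first: this, not chronology, is your feed.
    return sorted(posts, key=lambda p: engagement_score(p, affinity), reverse=True)

posts = [
    Post("alice", "sunset photo", age_hours=2, likes=10, shares=0),
    Post("bob", "hot take", age_hours=1, likes=8, shares=5),
]
feed = rank_feed(posts, affinity={"bob": 0.5})  # bob's post outranks alice's
```

Note that the chronologically older, much-shared "hot take" beats the gentler post: the ordering falls out of whatever the score rewards.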

  • The company makes money from advertising

Ad revenue enables the platform to be free to users. With low barriers to adoption, you can get large user numbers quickly, and access those all-important network effects. As a user you’ll see a few ads and paid posts as your ‘price’ of entry, and some of them might actually be relevant to you. But here’s the thing: the algorithm will learn your behaviours and preferences, and prioritise posts you find ‘engaging’, to increase your time on the site, and thus ad revenue.
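That learning loop can be sketched in a few lines. This is an assumption-laden toy (a simple exponential moving average per account, with a made-up learning rate), not any platform’s real model, but it shows the mechanism: each engagement nudges the inferred preference up, each scroll-past nudges it down, and the feed then shows you more of what you already engage with.

```python
def update_affinity(affinity: dict[str, float], author: str,
                    engaged: bool, lr: float = 0.1) -> None:
    """Toy online update: move the user's inferred preference for an
    author towards 1.0 when they engage with a post, towards 0.0 when
    they scroll past. The affinity dict then feeds back into ranking."""
    current = affinity.get(author, 0.0)
    target = 1.0 if engaged else 0.0
    affinity[author] = current + lr * (target - current)

# A user who keeps engaging with one provocative account:
affinity: dict[str, float] = {}
for _ in range(20):
    update_affinity(affinity, "outrage_account", engaged=True)
    update_affinity(affinity, "calm_account", engaged=False)
# outrage_account's affinity climbs towards 1; calm_account's stays at 0,
# so the ranking boost compounds with every session.
```

The feedback loop is the point: engagement trains the model, the model boosts similar content, and the boosted content generates more engagement.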

  • Outrage is good for business

As any Twitter user knows, taking the middle ground doesn’t get much engagement (unless you’re funny or famous). If you want likes and shares, you need to be opinionated and pithy - but this can easily turn into ‘extreme and reductive’. On Facebook, groups and communities are more siloed from one another, but the feed is still subject to the same addictive and polarising forces. The pursuit of advertising revenue on fast-moving platforms is a huge accelerant of political polarisation.

  • If you’re not paying, you are the product (and so is your demographic)

Once the algorithm has you pegged as a believer in flat-earth theory (or a Hillary skeptic), you can be targeted by ads with the potential to further manipulate your behaviour. We saw this in 2016, when targeted political advertising likely shifted the vote in the Brexit and Trump campaigns by a few percentage points. (There is also the much harder-to-quantify impact of bots and paid actors, which also played a role.)

For the curious, I’d recommend The System by James Ball as a primer on the above.

So while advertising has enabled social media to grow fast, it has also created incentives to distort our interactions and relationships with others.

At the level of the individual, we see widespread social media addiction and radicalisation, leading to vicious culture wars and real-life tragedy. People now spend an average of two and a half hours a day on some form of social media (and tracking cookies enable targeted ads for as long as we’re online), and an increasing share of people get their news and form their views from social media. Social media algorithms curate our sense of reality, and can distort it.

At the societal level, bad actors, bots and targeted political advertising threaten democracy itself, and thus national security. (Things are also going to start moving faster, and will get weirder, as both the targeting and the content of ads can now be heavily automated.)

We can’t put the genie back in the bottle, nor would we want to. These platforms are popular in part because they are valuable. They are a part of our society and something like them will continue to exist for a long time. Increasingly, they will incorporate other everyday functions like payments, and there is the potential for some to become so socially valuable that they start to look like essential citizen services.

Beyond the current political scuffles, then, there is a strong case for smart regulation of social media platforms as businesses. Is it enough to incentivise transparency? What metrics could show that social media is doing society more good than harm?*

*ps. There’s a deeper question beyond that, one which is still science fiction but worth considering. We know that an algorithm that curates our social and informational environment can influence us towards certain behaviours. Over time this could perhaps lead us to develop certain attributes or qualities, such as being more curious or caring. Such an algorithm would have a power somewhere between a politician, a parent and a god. In whose hands should this power reside?