Law and censorship in social media

The Objective, June 19, 2022

In 1974, Ronald Coase (later awarded the Nobel Prize in economics) wrote that the market for ideas fails more than markets for other goods and services. There are therefore more reasons to regulate the market for ideas than other markets, even if in none of them is regulation necessarily preferable to free contracting. This Coasean discourse on the market for ideas fully applies to the failures observed in social media, but also to the regulation recently agreed by European Union (EU) institutions.

The firms that run social media networks draw up their algorithms to maximize profits, without showing any concern for the negative externalities they cause in the shaping of public opinion. They therefore tolerate the lynching by tweet of those considered wrongdoers, with serious effects in the form of cancellation and self-censorship. They also amplify the codes of opinion and conduct of activist minorities. And, most seriously, they enshrine the content of such codes as social norms, charging them with emotional moralism while disregarding the checks and balances that should be applied to any democratic law before it is enacted.

Moreover, managers and workers in such firms often seem to be imposing their personal moral and ideological preferences, generally veering to the left, when designing the content of audio-visual platforms or drawing up the algorithms that moderate social media.

In theory, in view of such market failings, good regulation should align private incentives with the public good but, in fact, each country adopts a very different strategy. China imposes exhaustive controls on Internet platforms. At the other extreme, the USA trusts in the self-regulation of private firms and in litigation between users and businesses, although there have been many regulatory proposals, such as the one formulated by Professor Jonathan Haidt (NYU) in a recent essay.

EU institutions have just agreed on a Digital Services Act. Some people consider it will mark a turning point, but for how long? The EU has no relevant Internet platform, so its aspiration to become the main global regulator of the Internet rests only on its self-appointed role as the “representative” of EU users. But such representation is dubious, and even the long-term relative importance of those users depends on the actual competitiveness of the EU economy. It is not possible to live on past glories forever.

Moreover, the Act contains many debatable points.

First, in view of the cost of centralized control (the plan is to hire 230 new civil servants, very few for the task at hand), legislators have opted for decentralized regulation based on the distribution of duties and rights among private agents. Not only will platforms have to comply with certain requirements; they will also have to ensure that both users and “trusted flaggers” can report illegal or improper content.

This decentralized but mandatory solution entails risks. Such compliance mechanisms—both auditors and private flaggers—will be competitive organizations, so they will tend to be efficient. But that efficiency will serve their private interests (whether lucrative or ideological), which will not necessarily coincide with the public interest or lead to neutral moderation of the networks. Moreover, the Act yet again expands normative compliance, a sector that already shows many signs of parasitism, in line with the old European tradition of selling indulgences.

Second, obliging platforms to allow users to report “illicit” content may just reinforce the type of harassment that proliferates today. In this way, well-organized and well-subsidized activist minorities can become even more powerful, imposing their own conception of what society should treat as illegal. It basically submits the drafting of moral codes to a permanent referendum in which only the most active minorities will participate, a formula allowing extreme positions to take the upper hand.

Other possibilities should be considered regarding the design of algorithms, such as enabling networks to limit dissemination by means of likes and, above all, shares and retweets, as recommended by Haidt in the above-mentioned essay. This would be in line with the limits on forwarding messages introduced by WhatsApp which, it claims, reduced “virality” by 70%. Platforms are often accused of having no interest in such limitations precisely because they reduce virality and, therefore, traffic. But WhatsApp’s voluntary adoption suggests that maximizing virality is not necessarily conducive to maximizing platform value.

Two other important changes relate to the proliferation of impersonator bots and, especially, to decentralized enforcement systems that assess the reputation of complainants. The latter have already been adopted by platforms such as Airbnb, which assesses both parties to a contract. The fact that networks such as Twitter have apparently not yet done the same suggests that their cancellation decisions may be driven more by the ideological preferences of employees and managers than by the neutral opinions of users.

An obvious exponent of self-regulation is Elon Musk with his bid to take control of Twitter. In addition to his preference for freedom of expression, the bid may well stem from the possibility that Twitter’s current management has been biased towards the left, in line with the preferences of its managers and workers, thus substantially reducing the firm’s market value. Musk’s takeover, in essence, may therefore aim to restore that value.

Another example of spontaneous corrective mechanisms is Netflix’s decision to resist pressure to “cancel” comedian Dave Chappelle (no less than 98% of Netflix workers support the Democratic Party). In particular, the case led Netflix to review its “cultural guidelines”, which now warn its workers that they may have to work on titles that run counter to their personal values: “if you’d find it hard to support our content breadth, Netflix may not be the best place for you”.

Such initiatives, and the fact that social media are so young, suggest that regulation should proceed with caution. In general, networks should be allowed to draw up their own mechanisms for self-regulation. Otherwise, instead of preventing the main problems, regulation may just aggravate the risks. The regulatory opportunism of the EU, as has often been the case in the past with EU consumer law, is not at all promising.