Aleksandr Wansbrough is a cultural critic and social philosopher who has just published “Capitalism and the Enchanted Screen: Myths and Allegories in the Digital Age”. I was recently privileged to have not one but two multi-hour conversations with him about online censorship in our times.
One of the recurring themes of our conversations was that regulation generally, and censorship especially, has the potential to take a peculiar form in a digital age. We termed this phenomenon uncentred and distributed regulation. It is not localized to a particular regulator; rather, it is a chimera of “public”, “private” and “civil society” regulation. Worryingly, this has the potential to overcome the regulatory limitations of each of these agents. Government regulation of speech in the United States is limited by the First Amendment, and private regulation is limited by the reality that no platform has a monopoly. By working in tandem, these agents might be able to create a more comprehensive form of regulation than any of them could operating alone.
Such uncentred and distributed regulation is not wholly novel. During the McCarthyist period, for example, the Hollywood blacklists and the cultural “purge” of leftism existed in a superposition between government activity (the House Un-American Activities Committee), private activity (film studios) and “civil society”. Although it is just a hunch, Aleksandr and I suspect this is the future of regulation of the internet- multi-centered and evading simple classification as either “public” or “private” censorship.
We are both very aware of the limitations of traditional frames for analyzing censorship. Censorship is often conceived as either on or off- either this thing is permitted or it is not (although this has always been an oversimplification to some degree). In an age of algorithmically generated newsfeeds, everything is, in a sense, censored or anti-censored (promoted) to some degree, as a function of the number of users it is shown to.
Simply demanding that we not be kicked off platforms is only the first layer- and corporations may be all too happy for us to restrict our attention to that layer, rather than the deeper mysteries of the algorithmic processes which run these sites. The reduction of traffic to Mother Jones, which you can read about here, is a great example of how devastating censorship can be without a single thing ever being “censored” in the classical sense. This raises a lot of interesting questions from a theorist’s perspective- for example, how can classical thinkers on censorship, from Mill to Marcuse, be reinterpreted and applied to these new modes of censorship and social engineering?
Moreover, these algorithmic processes can be, to a certain degree, outside of direct human control- handled by machine learning programs that have been “trained on” goals like maximizing engagement. As critics of algorithmic bias have shown, this is far from the same as saying that they are independent of human prejudices, since their training material is made and assembled by humans, and their goal is to cater to preferences which are themselves constructed in a flawed society. There is a danger that these algorithms might be used to “veil” human agency- to make controversial decisions on behalf of corporate actors, and so shield them behind a false neutrality.
Aleksandr raised an interesting issue which I had not considered. Traditionally there has been a distinction between what Aleksandr called “public speech”- directed to the world as a whole and more heavily censored- and “banal speech”- directed to one’s immediate circle and usually subject to less censorship. Social media has eroded this distinction. This erosion has implications for issues far beyond censorship, and in my view deserves a book-length treatment in itself.
The question of the hour on the left regarding online censorship is: where should we stand on the deplatforming of Trump from social media? Here Aleksandr and I have a tentative disagreement. I lean towards the view that we should weakly oppose the ban, whereas Aleksandr leans towards the view that we should be weakly supportive. Our overall view, on which we do agree, is that this is a question where process matters more than outcome. Twitter, Facebook, etc. constitute an enormous portion of the public sphere- far too much to be treated as “just another corporation”, or like a small private meeting hall which can choose whom to host. The sheer scale transforms them into something qualitatively different- more like a government than that small meeting hall. As such, the critical question is not who is banned but who makes the decision. We both agree that, at least theoretically, the ideal would be a kind of direct and deliberative democracy of users.
Of course, while it is distorted by the extreme monopoly power of these corporations, the need to appease their user bases does give them a sort of limited democratic character. In this regard we are intrigued by the differences between Twitter and Facebook- with Facebook seemingly far more intent on appeasing the right, and Twitter aiming itself more at a liberal audience. We both wonder which way the causation runs here- has the more liberal user base of Twitter pushed it towards a more liberal stance, or has the more liberal political stance of Twitter attracted a more liberal user base? Are both true? If so, which direction of causation predominates? Neither of us has the answers on this topic, but we would commend it as a good case study to start digging into the relationship between social media platforms and their users.
Another difference of emphasis between us was: should we be more worried about dystopian future scenarios of corporate and governmental control on a scale hitherto unimagined, or should we be more worried about forgetting or normalizing how bad things are already? I tend to worry more about nightmare futures, whereas Aleksandr tends to worry more about forgetting that we are, in many ways, already in cyber-hell.
One demand we discussed that could be taken up on these issues was a mandated “algorithmless feed option”. Users should have the option to press a button and enable a feed sorted purely by the order in which posts were published- free from any sort of optimization. Facebook has this option at the moment, but it is neither very good nor very accessible. While neither of us thought such a feed should be mandatory, we both agreed that users should have the option on every social media site and similar platform, and that we would use it ourselves, at least sometimes.
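To make concrete what such an option amounts to (using invented example posts, not any real platform’s data or API), the difference between an “algorithmless” feed and an optimized one is just the difference in what the posts are sorted by- publication time versus some engagement score:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    published: datetime
    engagement_score: float  # what an optimized feed might rank by

# Hypothetical posts for illustration only.
posts = [
    Post("a", "first", datetime(2021, 1, 1, 9, 0), 0.9),
    Post("b", "second", datetime(2021, 1, 1, 12, 0), 0.2),
    Post("c", "third", datetime(2021, 1, 1, 10, 30), 0.5),
]

def algorithmless_feed(posts):
    """Newest first, by publication time only- no optimization step."""
    return sorted(posts, key=lambda p: p.published, reverse=True)

def optimized_feed(posts):
    """What a platform might do instead: rank by predicted engagement."""
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

print([p.text for p in algorithmless_feed(posts)])  # ['second', 'third', 'first']
print([p.text for p in optimized_feed(posts)])      # ['first', 'third', 'second']
```

The point of the sketch is how little is actually being demanded: the chronological feed removes a ranking step rather than adding anything, which is partly why we think the option is so reasonable.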
We are aware that censorship online is far from purely a corporate or even governmental matter. There is a censorious instinct in us all, now more than ever. There are many causes- the bombardment of information, political hyper-polarisation etc. A continuum of censorship exists, from muting to trying to get someone fired from their job. Although we are broadly of the view that ordinary people should “turn down” the censorship dial in their head, obviously some kind of filter against obnoxiousness- and even the merely annoying- is necessary. But where is the line between good and bad here? A different kind of society than ours could have a sensible conversation about this, but under current conditions of social decay it is difficult.
We are both concerned about what I term “violent idealism”. Idealism in philosophy is the view that reality is fundamentally mental rather than physical. In social criticism it takes on a special and related meaning- the view that cultural norms and the like take precedence over material things, such as concrete resources, their distribution, questions of power and capacities for violence. Homing in even further, by idealism here we mean the tendency to see the real goal of social reform as the improvement of discourse and “culture”, understood as representations. The internet, by its very nature, tends to generate idealism. While we have many objections to idealism, so understood, we note particularly that in the digital context it tends to lead to overwrought and angry denunciation. If you think that someone’s bad take is literally all that is wrong with society, then an overwhelming discursive response- including vicious insults, threats of violence, etc.- may be justified.
Language has sometimes functioned as a “truce” from certain forms of conflict- a place for working out problems discursively- if not always amicably. In identifying language as the goal, and the field, of social contestation, violent idealism threatens to undermine this. If there is no gap between enacting one’s program and discussing one’s program, sectarianism on the left and in social movements is sure to result.
In view of these considerations, we find ourselves, in some broad sense, “against cancel culture”. But we despair at the way opposition to distributed, DIY censorship has been appropriated as a right-wing issue- as if the right itself did not have many censorious instincts!