Substack is the latest digital communication platform to set off a freedom of expression debate, after co-founder Hamish McKenzie stated that the newsletter platform would not demonetize those posting extremist content, namely Nazis and white supremacists.
The comments sparked both condemnation and defence. Some newsletters left the platform, other writers published protests against the policy, and some defended the stance on freedom of expression grounds.
The response prompted Substack to change tack. As Alex Hern commented in his TechScape column for The Guardian on 9 January:
Substack is speedrunning a process that every social network — every internet platform — has gone through at some point or another. First, we don’t need moderation (unless you’re a sex worker); then, we care about free speech so we won’t do moderation; then, we’re going to moderate only the most egregious and uncontroversial examples.
Among the factors complicating freedom of expression on social media and social networks are ownership and scale: in short, economics.
Extremist views have always found a “publisher” or platform, even if that publisher is the holder of the view and the platform a small circle of like-minded friends.
However, social networking sites have the capacity, at minimum, to heighten the visibility of people and views and, at most, to transform such individual or isolated rants into viral posts, making minority opinions look like majority trends.
With views, engagement, and subscribers key to any platform’s financial success, anything that attracts interest adds to profit, whereas any move to limit content not only adds moderation and programming costs but also risks reducing participation and thus income.
So it is predictable from an economic standpoint, even without an owner who is an avowed free speech advocate, that for social networking sites, from Facebook and X to Substack, freedom of expression is the default argument for little or no moderation of individual content. It is equally predictable that the main driver for changing that policy is, again, economics: either people unsubscribe or the government threatens fines.
At a recent workshop on digital gender justice organized by WACC and the World Council of Churches, Lucina Di Meco, co-founder of #ShePersisted, highlighted the global prevalence of disinformation against women in politics — including how the economics of social media companies are central to the problem.
“Social media platforms are curators of content,” she said, and their algorithms reward and amplify hate for profit.
While highlighting the need for a global framework to regulate social media platforms and prevent disinformation, she also pointed to the potential of consumer protection law as a way to hold platforms accountable for the harms their products cause their own customers.
“They are private companies that have consumers, but who is protecting their consumers from the negative effects of what they experience online, and its impacts on democracy and social cohesion?” she asked.
Such an application would require government action, both in regulation and in enforcement. And there are signs that governments are taking steps, if not yet bold ones. As Hern remarked:
One of the more satisfying things to have happened over the long decade I’ve been covering technology is watching governments slowly realise that they actually have power and can use it to do things. It’s a stunning insight, I know. And yet it is increasingly common for an industry to be overturned by, effectively, a state waking up one day and realising that it doesn’t really like the status quo.
While freedom of expression is a fundamental right, accountability — from the individual to the company monetizing the content — is also essential.
We have to work for an ecosystem that enables one and protects all.