Just over five years ago this week, Twitter (now X under Elon Musk) officially banned Donald Trump from using its platform.
The then US president, who had lost the November 2020 election to Joe Biden and immediately questioned its legitimacy, was at that point one of the most prolific Twitter users, with almost 89 million followers.
His role as a public figure with a mass following was exactly the kind of alchemy social media platforms thrive on – mass reach that ignites conversations and drives far-reaching engagement.
Read: Elon Musk’s tweets echo tactics used to great effect by Donald Trump
The intersection between advertising spend and user reach means social media platforms prize a mass following like the one Trump had.
Distributor or publisher?
The problem that Twitter faced was the question of whether it is a distributor or publisher of information and whether it has any responsibility to moderate or interfere with content posted by its users.
The distinction between the two is essentially the existential question for social media and similar platforms.
Just 30 years earlier, as the world wide web was becoming increasingly worldwide, a couple of cases in the US courts tried to tackle the distinction.
In the case of Cubby versus CompuServe, CompuServe had created an online general information service, similar to a library, on which users could create special interest forums to discuss issues germane to a particular focus area.
Rumorville USA was a publication that participated in the forum and provided information about the journalism industry on the CompuServe platform (the conduit). CompuServe had no editorial discretion regarding the Rumorville posts. Some of the posts related to a competitor of Rumorville – Skuttlebut, which had been developed by Cubby Incorporated.
A case was then brought against CompuServe alleging that it had facilitated the publication of the purportedly false and defamatory statements made against Cubby and Skuttlebut and hence should be held liable.
The judgment in the case concluded that CompuServe had merely served as a distributor rather than a publisher of the content as it had no editorial discretion whatsoever relating to the posts in question. It therefore had no liability.
A few years later, a separate case involving Stratton Oakmont and Prodigy Services reached the conclusion that Prodigy Services – which offered services similar to CompuServe but crucially exercised much greater editorial control over what was posted on its platforms – was in the realm of publisher and could be held liable for the content published on its platforms.
‘Communications Decency Act’
This distinction – which would have had significant implications for the modern-day social media platforms – was essentially addressed in 1996 when the Communications Decency Act was passed by the US Congress.
Its most important element – Section 230 – essentially provided immunity for internet platforms that serve only as conduits for content rather than proactively modifying and editing it.
Under the protective power of this immunity, platforms like Facebook and Twitter exploded, and the safeguards they put in place focused on straightforward issues like the protection of minors and the identification and moderation of explicitly harmful content.
Much more complicated for these platforms was the question of balancing political rhetoric, freedom of expression or free speech, and the undeniable correlation between the power of information distribution and its influence that these social media platforms ended up with.
Facebook’s Cambridge Analytica scandal had its genesis in the alchemy of these factors – unfiltered political expression, mass reach, and aggregation of content targeted at specific users.
The ability to use social media platforms to create targeted political messaging was one thing; it was the question of whether such political messages – especially if they were inaccurate or misleading – should be moderated or deleted by the platforms themselves that divided everyone.
The risk was clear: the more involved the platforms in moderating content, the greater the chance they would be accused of being publishers rather than innocent distributors.
When one considers the potential liability issues in a highly litigious society, the retreat towards doing the minimum was the preferred approach.
Even as political polarisation and misinformation escalated, the risk of sliding towards censorship led platforms like Facebook and Twitter to find other ways of addressing the abuse of their platforms while avoiding actions that might forfeit the immunity of Section 230.
As Trump’s political rhetoric escalated ahead of the 2020 election, the conundrum deepened, and tentative ways of managing the process were preferred to outright bans.
It was only in January 2021, when Trump had lost political power, that Facebook and Twitter were emboldened to implement outright bans on him.
The idea of banning a man who still held presidential powers – and who could have moved against the protections of Section 230 – had been daunting enough that Facebook and Twitter preferred to tolerate and manage Trump rather than ban him while he was still president.
From the moment he lost those powers, the platforms were emboldened to act – a business decision rather than a matter of principle.
If that was the calm, then came the storm …
All of this would probably have remained the case, but developments in US politics since then have revived the problem.
While Trump lost the 2020 election, he did not fade into retirement: he decided to run for the presidency again in 2024 and to create his own social media platform to fill the gap left by his Twitter ban.
Purported free speech absolutist Elon Musk then threw his political weight behind Trump and acquired Twitter.
This led to an invitation for Trump to reactivate his Twitter account and saw Musk dismantle many of the safeguards Twitter had implemented to walk the tightrope between enabling free speech and exercising responsibility in order to avoid falling foul of Section 230.
Before the Musk takeover, aggrieved users could complain about offending posts and moderators could exercise educated discretion in responding to complaints.
This meant that if anyone accused the moderators of acting as editors rather than conduits, the complaints-driven process could serve as the defence that preserved the immunity protection.
When this was gradually and systematically dismantled, it meant that Musk’s Twitter could claim to be the absolutist distributor that could never be accused of interfering with content.
The inelegant and flawed compromise that X adopted? The Community Notes model – which passed the buck of ‘moderation’ to other users.
Users who felt aggrieved by this shift were free to leave, and Musk’s commitment to a free speech town square was entrenched.
And then came AI …
Another development of recent years, however, has created fresh problems for social media platforms.
Artificial intelligence (AI) has become the phenomenon fast taking over the world of technology.
Read: Artificial intelligence in South Africa comes with special dilemmas
The ability of AI tools to create and generate content is at once innovative and a poisoned chalice.
In recent days, one of the more prominent AI tools – Grok, the chatbot built into X by its owner – has responded to user instructions requesting the undressing and re-dressing of individuals whose pictures have been uploaded to the internet.
This means Grok applies its AI powers to alter images and create updated versions based on the requests of individuals who do not have to be the owners of the pictures or be the ones actually in the pictures.
More alarmingly – and predictably – the actual owners of the pictures, or the ones depicted in the pictures, are not required to provide any consent for the alteration of the pictures.
This has led to alarming – and yet again predictable – results where individuals have seen their images altered in manners that are unsolicited and often disturbing.
For all of Grok’s purported intelligence, it remains incapable of reliably identifying minors, and of retaining sufficient memory across multiple interactions to recall that someone has indicated that their picture should not be altered.
This creates the risk that Grok can respond to a prompt to undress a minor and then claim it was merely responding to a user instruction.
If an individual explicitly directs Grok to not execute prompts relating to their pictures, Grok’s commitment to honour that request stretches no further than that interaction as it lacks the capacity to retain a database of requests and instructions relating to the same picture.
Perhaps such a proposition is fanciful anyway as Grok would have to possess the capability to identify all pictures relating to a user who has made a request for their pictures not to be altered.
This sounds highly improbable, and users with a large picture footprint across the internet probably do not have any real recourse in this arena.
A step too far?
The problem with such AI tools retaining the ability to alter and repurpose pictures is that it leaves us all at risk of ending up with an archive of unsolicited, publicly available material that misrepresents what we once did or how we once dressed.
In a world where everything can be weaponised, it is difficult to see how anyone wins in this unfiltered and unaccountable universe.
The harm inherent in these developments may just be the one red line that everyone agrees is a step too far.
For the platforms themselves, the proposition that they are merely conduits may still suffice – but in the world of X, Grok and Elon Musk, it surely cannot be appropriate that a tool so embedded in the workings of Twitter/X can still claim to be an innocent conduit when it has clearly assumed the role of creator, manipulator and disseminator of what is increasingly harmful content.
Unfortunately for all of us, the rules of accountability across the world are inconsistent and so opaque that very few of us will have the ability to obtain due recourse.
This is perhaps the one time where regulators across the globe need to think differently about what the right balance between expression, privacy, citizen protection and platform accountability looks like.
One hopes that materialises before irreparable harm is done to innocent citizens who only ever wished to connect with friends and families across social platforms.