Algorithmic content sorting shows that internet media companies are failing to see the humans that they are serving. Algorithmic sorting seeks to ‘show us the best content’ but Thomas Carroll argues that it is being manipulated by alt-right groups to serve darker purposes.
Before algorithmic sorting, most social media websites centred on a chronological timeline of posts; some used rating systems instead. Arguably one of the first major sites to implement algorithmic sorting was Reddit.
Reddit’s algorithm for ‘hotness’ was freely available, but the site has recently switched to a much more obscure ‘best’ ranking. Luckily, Reddit is fairly open about how it operates its algorithmic sorting (although that doesn’t mean it’s immune to manipulation – we’ll get to that later).
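For illustration, the classic ‘hot’ formula from Reddit’s open-source code can be sketched roughly as follows (a simplified Python rendering, not the exact production code; the constant 1134028003 is the epoch offset used in the original source):

```python
import math
from datetime import datetime, timezone

def epoch_seconds(date):
    """Seconds since the reference epoch used in Reddit's open-source code."""
    return date.timestamp() - 1134028003

def hot(ups, downs, date):
    """Classic Reddit 'hot' ranking: log-scaled net votes plus a recency bonus."""
    score = ups - downs
    order = math.log10(max(abs(score), 1))
    sign = 1 if score > 0 else (-1 if score < 0 else 0)
    return round(sign * order + epoch_seconds(date) / 45000, 7)
```

Because the vote term is logarithmic while the time term grows linearly, fresh posts quickly overtake older ones however well-voted – and, as we’ll see, a burst of early votes is disproportionately valuable.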
Other websites like Facebook and Twitter are much more secretive, and perhaps for good reason, as opaque sorting allows advertising to be inserted much more covertly. For instance, when Facebook introduced its algorithmic timeline, many online businesses saw a clear drop-off in impressions and engagement. This of course meant that Facebook could now charge the businesses that lost out for better ranking in the algorithm. However, I believe an unforeseen problem arises from this: removing the impartial nature of chronological sorting and replacing it with algorithms creates a game that adversaries can play.
The alt-right know the internet; they were born in it and moulded by it, to quote Bane. They know the algorithms that drive these sites (… “the shadows betray you, because they belong to me”). This allows for an alarming spread of under-the-radar propaganda. The recommendations alongside political or philosophical videos are often filled with anti-feminist and racist content that works to instil ideas in, and indoctrinate, the curious.
At a time when children are allowed to explore YouTube freely, more needs to be done to rid ourselves of broken algorithms that serve them dangerous content by design. Despite social media companies’ claims that they are working against them, why do these alt-right propaganda posts keep floating to the top of our feeds?
The first and probably most accurate answer is that getting rid of it is hard for programmers – computers can’t easily tell what is and isn’t hateful content. A lot of the time this filtering is done with keywords, which can ultimately be quite damaging to conversation: blocking all mention of certain words or phrases borders on censorship. And so it is with algorithmic content too – the removal or de-prioritisation of a type of content breeds distrust in your user base.
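As a toy illustration of why keyword filtering damages conversation (a deliberately naive sketch – no platform’s real filter is this simple, and the block list here is hypothetical), consider a substring block list: it catches the hateful post and the documentary about the same topic alike.

```python
# Hypothetical block list for a naive substring-based moderation filter.
BLOCKED_TERMS = {"nazi"}

def is_flagged(text: str) -> bool:
    """Naive moderation: flag any post containing a blocked substring."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

# Hateful content is caught...
print(is_flagged("Nazi propaganda here"))                # True
# ...but so is legitimate discussion of the same topic:
print(is_flagged("a documentary about denazification"))  # True
```

This is exactly the trade-off described above: the filter cannot distinguish discussing hate from promoting it, so tightening it silences legitimate speech.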
This leads on to a secondary idea: that these viral pieces of content (especially under capital-driven algorithms like Facebook’s and YouTube’s) generate income for the services. The corporations behind them trade ethics for profit. Most large social media companies outwardly support progressive causes, yet potentially allow anti-progressive media to rise to the top of their algorithms because it generates the money that keeps the services running.
There are theories surrounding the death of Google Reader, and with it RSS, which suggest that Google wanted to remove the autonomy of the user and provide a centralised news source in which algorithmically selected outlets – and thus those that have paid Google – show up first (i.e. Google News). This results in a further centralisation of the web and more advertising money for the big websites.
.@Google killed its Reader in 2013 because RSS as a format gives readers agency, doesn't track browsing to sell ads, and lets the user chose what they want to read. As opposed to algorithmic personalisation which siloes us into increasingly homogenous demographics for advertisers https://t.co/YAThAP6bdO
— Luc Lewitanski (@LucLewitanski) July 2, 2018
To bring both of these ideas together more visibly, YouTube has demonetised videos (removing their creators’ advertising revenue) simply for containing the word ‘transgender’, all the while releasing videos claiming that YouTube supports the trans community. Whilst this isn’t out-and-out censorship, it further marginalises the community, which is wholly negative.
Another example comes from the 2016 US presidential election, when Facebook’s (now defunct) ‘Trending’ panel transitioned from human moderation (and humans tend to be better at spotting things like this) to algorithmic control. Within two days, fake news appeared in the panel. Alongside this, and potentially fuelling conspiracy theories, Facebook had met with a group of conservatives in May of that year to discuss a perceived anti-conservative bias in the panel.
Vote-based algorithms are also left open to abuse through an internet phenomenon known as brigading: organising people with a common cause to pile on positive votes. The alt-right are especially good at this, once organising ‘raids’ from 4chan.org’s /pol/ board and now moving to more secretive private chatrooms (mainly on Discord, it seems). Algorithmically sorted content allows guerrilla marketers to buy votes for their posts, and political groups to promote theirs, all whilst simulating a natural, unpromoted presence. Done well, it can create the impression of substantial support for the positions in a post, which is ultimately dangerous because, with multiple sock-puppet accounts and clever (but nonetheless fake) conversations, a vocal minority can very quickly seem much larger than it is. Sometimes it is obvious; sometimes it is not.
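To put rough numbers on why brigading works so well: under Reddit’s classic, openly published ‘hot’ formula, the vote component of a post’s score is the base-10 logarithm of its net votes, and each whole point of score is worth 45000 seconds of recency. A back-of-the-envelope sketch (the specific vote counts are invented for illustration):

```python
import math

def vote_term(net_votes: int) -> float:
    """Log-scaled vote component of Reddit's classic 'hot' score."""
    return math.log10(max(abs(net_votes), 1))

# A brigade pushing a post from 5 net votes to 50 adds one full point:
boost = vote_term(50) - vote_term(5)    # ~1.0

# Each point of score equals 45000 seconds of recency, so ~45 sock
# puppets buy the post roughly half a day's head start over rivals:
extra_hours = boost * 45000 / 3600      # ~12.5
```

The logarithm is the key: organic latecomer votes face diminishing returns, but a small coordinated burst on a fresh, low-vote post is cheap and buys hours of front-page visibility.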
For the left, a movement that did not originate on the internet, these operations lie beyond the scope of traditional action (that is: protests, rallies and strikes) – tactics which the alt-right no doubt use too, but have successfully synthesised with subtle (and not-so-subtle) online indoctrination. Perhaps the left needs to adapt to these quickly changing conditions, because to a large extent “the left can’t meme“. Perhaps this is because the alt-right are anti-liberal: through simple memes they can subvert mainstream narratives successfully and efficiently, whereas the public’s stereotypical view of the ‘left’ today ranges between easy-going social democrats, more vehemently principled labour movement supporters and identity politics advocates. For the revolutionary left this leaves a problem with internet-based challenges to mainstream narratives: how do you establish a meaningful subversive position, and what do you subvert, without first having to laboriously explain the difference between the essentially conformist centre left and the radical left – an explanation that would take the sting out of the message and undermine its ability for rapid, viral expansion? The ever-expanding Overton window seems to guide neoliberal politicians to beg, borrow and steal from both left and right, which can only add to the misunderstanding among the public – creating the potential for the left to be seen as merely the most enthusiastic and outspoken proponents of neoliberal identity politics.
For a revolutionary left meme, we can look to Fully Automated Luxury Gay Space Communism, which reframes old Marxist ideas about reducing labour and poverty through industrial production with fashionable modern trappings (post-scarcity, gayness, SPACE!). You can judge for yourself how effective this meme is at presenting communism in a new light and nudging people towards a leftist viewpoint, but it has made an impact.
Even so, the alt-right seem to have a voice like no other movement on the internet. I posit that this is because of their ability to understand the communication channels we use, rather than any particularly incendiary aspect of their politics. To fight back, people need to become aware of how the content they see is manipulated. However, we must not lose sight of the true battlefield when it comes to indoctrination into neo-Nazi organisations (with which the online alt-right are intimately affiliated): real life, where young and hopeless men are all too easily taken in by arguments of tribalism and alienation from ‘those other people’.
In conclusion, we need to ask ourselves: what are the algorithms for? Clearly not for an improved experience – more often than not they degrade it. These algorithms are implemented to allow certain voices to be louder than others, whether through the plain influence of capital or through technical manipulation of the way they work. Algorithms take voices away from the many who were once able to express themselves freely on the internet, and replace them with corporate interests and trolls. Those considering an algorithmic feed for their website should ask themselves: what effect will this have on my community?