Social Media and the Propagation of Far-Right Hate

Humanity has constantly striven to develop newer and more advanced tools to feed our insatiable appetite to communicate with each other, and with the advent of social media and instant messaging, it seems we are finally on the cusp of achieving this – at our own peril.

Social media platforms have emerged as powerful tools for communicating with individuals across the world, allowing us to disseminate knowledge; develop powerful campaigns that transform geopolitical landscapes; tap into international start-ups; and, of course, maintain better contact with friends and family. However, the instantaneous nature of these platforms, their accessibility, and the anonymity they provide have also left them rife with hateful and divisive rhetoric that is deeply damaging to community cohesion.

A particularly insidious problem is that, within the shadows of the net, networks have developed that aim to capitalise on hateful rhetoric and methodically sow further discord by propagating far-right and populist narratives.

A recent report by the New York-based research institute Data & Society, entitled “Alternative Influence: Broadcasting the Reactionary Right on YouTube”, set out to map one such network, which it terms the Alternative Influence Network (AIN). The report investigated 81 YouTube channels that gave a platform to around 65 different political influencers – individuals “who shape public opinion and advertise goods and services through the ‘conscientious calibration’ of their online personae” by building audiences and ‘selling’ them far-right ideology.

The report argues that members of the AIN collaborate methodically to reinforce one another’s narratives, ultimately aiming to normalise far-right rhetoric and shift the ‘Overton window’, the range of political views that the majority of society considers acceptable at any given time, into dangerous territory.
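
To make this concrete, one way such a network could be mapped is as a graph in which channels and influencers are nodes and guest appearances are edges. The sketch below is a minimal illustration of that idea, assuming an entirely hypothetical list of appearances; it is not the report’s actual dataset or methodology.

```python
# A minimal sketch of mapping a collaboration network like the AIN:
# channels and influencers become nodes, and a guest appearance by an
# influencer on a channel becomes an edge. All names below are
# illustrative placeholders, not real data.
import networkx as nx

# (host_channel, guest_influencer) pairs - hypothetical examples
appearances = [
    ("channel_a", "influencer_x"),
    ("channel_a", "influencer_y"),
    ("channel_b", "influencer_x"),
    ("channel_c", "influencer_y"),
    ("channel_c", "influencer_x"),
]

graph = nx.Graph()
graph.add_edges_from(appearances)

# Degree centrality highlights the influencers who appear across the
# most channels - the bridges that knit the network together.
centrality = nx.degree_centrality(graph)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
```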

These influencers deploy the tactics of “brand influencers”, such as developing “highly intimate relationships with their followers”, which they then exploit to pass on political opinions and views under the guise of content that is “light-hearted, entertaining, rebellious, and fun”.

Members of this network include infamous far-right activists such as Stephen Yaxley-Lennon, also known as Tommy Robinson, founder of the extreme right-wing English Defence League (EDL); Richard Spencer, a prominent American white supremacist; and Lauren Southern, a Canadian far-right activist who was denied entry to the UK because of her anti-Muslim views.

However, what is notable is that the network also contains individuals who self-describe as “libertarians” but connect with political influencers who self-describe as “white supremacists”. One instance illustrating the danger of this is the YouTube ‘debate’ on scientific racism between Richard Spencer and Carl Benjamin, a self-described libertarian. The debate, which went live on 4 January 2018, trended as the top video worldwide with “over 10,000 active viewers”. With years of experience in spouting far-right rhetoric and justifying it with pseudo-science, Spencer left many viewers with the feeling that he had not only won the debate but that his views on scientific racism were even justifiable. Indeed, one fan eagerly commented: “I’ve never really listened to Spencer speak before but it is immediately apparent that he’s on a whole different level”. By engaging with Spencer, Benjamin essentially gave this white-supremacist actor a platform and allowed him access to his followers. Through such connection and collaboration, the AIN both deliberately and inadvertently propagates and reinforces far-right rhetoric.

Other social media platforms experience similar problems, with coordinated groups acting in synchrony to sow discord and propagate hateful rhetoric.

A recent report by Demos, a UK-based cross-party think-tank, titled “Russian Influence Operations on Twitter”, examines the exploitation of “Twitter bots” by the Russian state. The report analysed datasets released by Twitter in October 2018, comprising around “9 million tweets from 3,841 blocked accounts” associated with the Internet Research Agency (IRA) – a Russian organisation founded in 2013 that has been heavily criticised for exploiting social media platforms to push pro-Russian propaganda both domestically and internationally.

The report found that the network of bots expended significant effort to propagate hateful rhetoric against Muslims in particular.

Indeed, the “most widely-followed and visible troll account” shared more than 100 tweets, 60% of which were related to Islam. One such tweet read: “London: Muslims running a campaign stall for Sharia law! Must be sponsored by @MayorofLondon! #BanIslam”. Another: “Welcome To The New Europe! Muslim migrants shouting in London ‘This is our country now, GET OUT!’ #Rapefugees”. The report also found that the most frequent topics of tweets sent during the six months prior to the Brexit referendum were “Islam” and “Muslims”.
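
For a sense of how such findings might be produced, the sketch below tallies keyword mentions across a tweet dataset. The file name, column name, and keyword list are assumptions made for illustration; they are not Demos’s actual data or methodology.

```python
# A minimal sketch of tallying topic frequencies in a tweet dataset.
# The file name, column name, and keyword list are illustrative
# assumptions, not Demos's actual methodology or data.
import csv
from collections import Counter

KEYWORDS = ["islam", "muslim", "sharia", "brexit"]  # assumed topic terms

counts = Counter()
with open("ira_tweets.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        text = row["tweet_text"].lower()  # assumed column name
        for keyword in KEYWORDS:
            if keyword in text:
                counts[keyword] += 1

for keyword, n in counts.most_common():
    print(f"{keyword}: {n} tweets")
```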

What is most worrying is that the technologies utilised by such networks on these social media platforms are rudimentary and fairly easy to spot compared with the new technologies currently being developed to help propagate far-right rhetoric.

“Deep fakes” are a new technology that uses machine-learning techniques to generate video and audio that appears, with unsettling realism, to show real people saying or doing things they never did. One example is the fake video of Donald Trump released in May by the Flemish Socialist Party sp.a, which led to hundreds of Twitter users commenting on the President’s seemingly outrageous statements.
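
For readers curious about the mechanics, classic face-swap deep fakes are often described as built on a shared encoder with one decoder per identity: encode person A’s expression and pose, then decode it as person B. The toy sketch below illustrates only that architectural idea, with random tensors standing in for real face crops; it is untrained and is in no way a working deep-fake pipeline.

```python
# Toy sketch of the shared-encoder / per-identity-decoder architecture
# behind classic face-swap deep fakes. One encoder learns a common
# facial representation; each person gets their own decoder. Random
# tensors stand in for real face crops here.
import torch
import torch.nn as nn

def make_decoder():
    return nn.Sequential(nn.Linear(128, 64 * 64 * 3), nn.Sigmoid())

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 128), nn.ReLU())
decoder_a = make_decoder()  # would be trained to reconstruct person A
decoder_b = make_decoder()  # would be trained to reconstruct person B

# In training, each (encoder, decoder) pair is optimised to reconstruct
# its own person's face crops; the encoder is shared between both.
face_a = torch.rand(1, 3, 64, 64)  # stand-in for a frame of person A

# The "swap": encode A's expression and pose, decode as person B.
with torch.no_grad():
    swapped = decoder_b(encoder(face_a)).view(1, 3, 64, 64)
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```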

The problem we face as a society, then, is fake news – the all-too-frequent spread of misinformation across social media platforms. The problem governments across the world face, however, is a far more significant one.

Cyberspace is not the same plane of existence as the physical world – regulations there are far more difficult to enforce. Governments are built on ideas of centralised power and physical territory, whereas cyberspace embodies decentralised power. Indeed, groups operating in cyberspace share the mindset of John Perry Barlow’s 1996 “A Declaration of the Independence of Cyberspace”:

“Governments derive their just powers from the consent of the governed. You have neither solicited nor received ours. We did not invite you. You do not know us, nor do you know our world. Cyberspace does not lie within your borders”.

Governments must act to assert their authority over social media companies and hold them accountable for the hate prevalent on their platforms, because whilst we as a society have never been more connected, we are also drowning in discourse and bombarded with political rhetoric that constantly propagates populist narratives.

It is imperative that governments pay far greater attention to the digital plane and the role of social media platforms in influencing the psyche of the nation. In Joshua Kopstein’s words, “it is no longer okay to not know how the internet works”.
