Humanity has constantly strived to develop new and more advanced tools to feed our insatiable appetite to communicate with each other, and with the advent of social media and instantaneous messaging, it seems we are finally at the cusp of achieving this – at our own peril.
Social media platforms have emerged as powerful tools to communicate with individuals across the world, allowing us to disseminate knowledge; develop powerful campaigns that transform geopolitical landscapes; tap into international start-ups; and, of course, help us maintain better contact with friends and family. However, the instantaneous nature of these platforms, their accessibility, and the anonymity that they provide have also resulted in the platforms being rife with hateful and divisive rhetoric which is immeasurably dangerous to the aims of community cohesion.
A particularly insidious problem is that, within the shadows of the net, networks have developed that aim to capitalise on hateful rhetoric and methodically sow further discord by propagating far-right and populist narratives.
A recent report by the New York-based research institute Data & Society, entitled "Alternative Influence: Broadcasting the Reactionary Right on YouTube", was one specific project that aimed to map this network, described as the Alternative Influence Network (AIN), by investigating 81 YouTube channels that gave a platform to around 65 different political influencers. The report describes "political influencers" as individuals "who shape public opinion and advertise goods and services through the 'conscientious calibration' of their online personae" by building audiences and "selling" them far-right ideology.
The report argues that the AIN acts methodically to collaborate and reinforce its narratives, ultimately aiming to normalise far-right rhetoric and shift the "Overton window" (the range of political views that, at any given time, the majority of society considers acceptable) into dangerous territory.
They deploy the tactics of "brand influencers", such as developing "highly intimate relationships with their followers", which are then exploited to pass on political opinions and views under the guise of being "light-hearted, entertaining, rebellious, and fun".
Members of this network include infamous far-right activists such as Stephen Yaxley Lennon, also known as Tommy Robinson, founder of the extreme right-wing English Defence League (EDL); Richard Spencer, a prominent American white supremacist; and Lauren Southern, a Canadian far-right activist who was denied entry to the UK because of her anti-Muslim views.
However, what is notable is that the network also contains individuals who self-describe as "libertarians" but connect with political influencers who self-describe as "white supremacists". One instance illustrating the problems of this is the YouTube "debate" on scientific racism between Richard Spencer and Carl Benjamin, a self-described libertarian. The debate, which went live on the 4th of January 2018, was trending as the topmost video worldwide with "over 10,000 active viewers". With Spencer having years of experience in spouting far-right rhetoric and justifying it with pseudo-science, many viewers were left with the feeling that Spencer had not only won the debate but that his views on scientific racism were even justifiable. Indeed, one fan eagerly commented: "I've never really listened to Spencer speak before but it is immediately apparent that he's on a whole different level". Benjamin, by engaging with Spencer, essentially gave a platform to this white-supremacist actor and allowed him access to his followers. Through this practice of connecting and collaborating, the AIN both deliberately and inadvertently propagated and reinforced far-right rhetoric.
Other social media platforms also experience similar problems whereby coordinated groups act in synchrony to create discord and propagate hate rhetoric.
The recent report by Demos, a UK-based cross-party think-tank, titled "Russian Influence Operations on Twitter", considers the exploitation of "Twitter bots" by the Russian state. The report looked at datasets released by Twitter in October 2018, composed of around "9 million tweets from 3,841 blocked accounts", which were associated with the Internet Research Agency (IRA), a Russian organisation that was founded in 2013 and has been heavily criticised for exploiting social media platforms to push pro-Russian propaganda both domestically and internationally.
The report found that there was a significant amount of effort expended by the network of bots to propagate hate rhetoric against Muslims in particular.
Indeed, the "most widely-followed and visible troll account" shared more than 100 tweets, 60% of which were related to Islam. One such tweet was "London: Muslims running a campaign stall for Sharia law! Must be sponsored by @MayorofLondon! #BanIslam". Another was "Welcome To The New Europe! Muslim migrants shouting in London 'This is our country now, GET OUT!' #Rapefugees". The report found that the most frequent topics of tweets sent during the six months prior to the Brexit referendum were "Islam" and "Muslims".
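The kind of topic-frequency analysis behind findings like this can be sketched in a few lines. The snippet below is purely illustrative and is not the Demos methodology: it counts case-insensitive keyword mentions across a small, hypothetical handful of tweets standing in for the real corpus, then ranks the keywords by how often they appear.

```python
from collections import Counter

# Hypothetical sample standing in for a tweet dataset; the actual corpus
# analysed by Demos contained around 9 million tweets from 3,841 accounts.
tweets = [
    "London: Muslims running a campaign stall for Sharia law! #BanIslam",
    "Welcome To The New Europe! #Rapefugees",
    "Brexit means taking back control",
    "Islam and Muslims dominate the conversation again",
]

# Keywords whose frequency we want to compare (illustrative choices only).
keywords = ["islam", "muslim", "brexit", "europe"]

def keyword_counts(texts, keywords):
    """Count how many texts mention each keyword (case-insensitive)."""
    counts = Counter()
    for text in texts:
        lowered = text.lower()
        for kw in keywords:
            if kw in lowered:
                counts[kw] += 1
    return counts

counts = keyword_counts(tweets, keywords)
# Rank topics by the number of tweets mentioning them.
for kw, n in counts.most_common():
    print(kw, n)
```

A real analysis at this scale would also need tokenisation, de-duplication, and streaming over the dataset rather than an in-memory list, but the ranking-by-mention-count idea is the same.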
What is most worrying is that the technologies used by such networks on these social media platforms are rudimentary and fairly easy to spot compared with the new technologies currently being developed to propagate far-right rhetoric.
"Deep fakes" are a new form of technology that uses machine learning techniques to generate video and audio products that appear to show real people saying or doing things they never did, at a perplexingly realistic level. One example is the fake video of Donald Trump released in May by the Flemish Socialist Party sp.a, which led to hundreds of users on Twitter commenting on the President's seemingly outrageous statements.
Therefore, the problem we face as a society is the issue of fake news: the dissemination of misinformation on social media platforms, which occurs all too often. The problem governments across the world face, however, is a far more significant one.
Cyberspace is not the same plane of existence as the physical world, and regulations there are far more difficult to enforce. Governments originate from ideas of centralised power and concrete objects, whereas cyberspace propagates ideas of decentralised power. Indeed, groups operating in cyberspace share the mindset of John Perry Barlow's 1996 Declaration of the Independence of Cyberspace:
"Governments derive their just powers from the consent of the governed. You have neither solicited nor received ours. We did not invite you. You do not know us, nor do you know our world. Cyberspace does not lie within your borders."
Governments must act to assert their authority over social media companies and hold them accountable for the hate prevalent on their platforms, because whilst we, as a society, have never been more connected, we are drowning in discourse and bombarded with politically charged rhetoric that constantly propagates populist narratives.
It is imperative that governments pay far greater attention to the digital plane and the role of social media platforms in influencing the psyche of the nation. In Joshua Kopstein's words, "it is no longer okay to not know how the internet works".