“Give a man a mask and he will show you his true face” is one of Oscar Wilde’s best-known witticisms, but in his day few could have imagined that it would also prove one of his most prescient. Long before social media made it possible for anyone to be heard by everyone without ever speaking to anyone, Wilde understood that anonymity offers a chance to vent frustrations and opinions one might never voice to a partner, priest or proprietor.
Poway, California was the site of the second synagogue shooting in the United States in six months. Both perpetrators can be traced to a toxic internet culture that rejoices in its ability to offend polite society. The white supremacist terrorist who attacked a New Zealand mosque likewise left the calling cards of far-right meme culture all over his livestream of the attack, which was shared widely in the aftermath. Hate for its own sake has carved out a niche in the internet’s most popular destinations.
It is in this context that the internet’s potential for connectivity has been hijacked by those intent on spreading hate speech: they harness open forums to disseminate half-truths, offensive memes and edited videos to advance their noxious agenda.
It was once thought that these harmful ideologies could be confined to the backwaters of the internet, discussed ad infinitum among true believers but never contaminating general discourse. However, a steady stream of violent attacks inspired by online rhetoric is illustrating the deficiencies in the prevailing cultural belief that starving racists of attention is the best strategy while raising questions about the efficacy of self-regulation.
As disturbing as solitary acts of internet-inspired terrorism are, social media platforms have been directly implicated in the radicalization behind them. YouTube in particular has emerged as a haven for extremist voices preaching everything from flat earth hokum to conspiracy theories that place migrants at the center of an elaborate plot to end Western civilization. The paranoia has been allowed to proliferate because YouTube quickly discovered that those with opinions far outside the mainstream relied on the site for daily briefings that backed up their views. More viewing time meant more money for YouTube, so it built a recommendation algorithm that ranked content by its ability to retain viewers, not by its factual accuracy. Over time this created a feedback loop that continually amplified the most extreme channels.
One does not need to look far on YouTube to encounter overt white nationalism or neo-Nazi apologia. In fact, YouTube will as often as not suggest toxic content to its impressionable viewers.
A Policy of Failure
The repeated failures to rein in the most toxic speech should tell us that alternative solutions need to be on the table. Both parties in Washington have so far been reluctant to interfere with the online ecosystem, based on their shared belief that any government interference would sap the dynamism of the tech sector.
But the reluctance to meddle with the practices of a private company has to be chalked up to more than just the market-based consensus that dominates Washington. It can also be attributed to a philosophical worldview that takes the label “speech” at face value, separating hateful rhetoric from the violence it inspires and treating the two as distinct instead of closely intertwined. The inability to connect hate speech with its consequences prevents purveyors of online hate from being viewed in the same light as those who take their words seriously and act on their recommendations.
The do-nothing attitude that has allowed hate speech to persist is only slightly defensible because social networks have proven somewhat capable of quarantining offensive speech – when they find it convenient. Twitter has had no trouble dismantling the once-impressive network of accounts that ISIS used to parrot its propaganda and win recruits. Nor will beheading videos be found on YouTube, as Google stepped up efforts to identify and eliminate ISIS content. But Silicon Valley remains unwilling to turn its impressive censorship apparatus on white nationalist voices, despite their ability to inspire similarly grotesque acts of terror.
But the U.S. government is not above setting aside its laissez-faire principles and applying pressure to social media platforms when it sees fit. For example, Iranian government officials were booted off of Instagram when President Trump designated the nation’s Revolutionary Guard a terrorist organization. YouTube has likewise seen fit to attach disclaimers warning of bias to videos from Russia-backed RT and Venezuelan-funded TeleSur, but does not find it necessary to warn of bias or propaganda originating from other state news agencies like the BBC.
Less Carrot, More Stick
In an online ecosystem that is coated in every variety of hate speech, we should win no plaudits for simply refusing to add to the pile. Action must be taken at the individual level to combat hate speech wherever it rears its ugly head and at the policy level to apply pressure on social media networks that forces them to tackle harmful rhetoric in spite of potential damage to their bottom line.
Hate speech is a cancer that infects every facet of online life and is spilling over into the real world with devastating consequences. A societal problem on this scale simply cannot be addressed through individual action alone, or through a misplaced trust that those who profit from the increased engagement hate brings to their platforms will voluntarily clamp down on violent rhetoric. Defeating threats to public safety requires the use of every tool at the public’s disposal, and that could mean regulating social media companies to force them to come to terms with the very real problems created by content found on their platforms.