Why It’s So Hard to Erase Hate Speech Online
A Netizen’s Guide to the 21st Century’s Eight Hells
The good news is that we’re not quite completely numb to horror.
That’s an odd way of starting off a piece about online hate speech and why it’s so hard to stamp out. But in the wake of horrific acts of violence (from New Zealand to El Paso) that were born — at least in part — online, it is important to take a moment and acknowledge that we’re not numb.
As a species, humanity hasn’t accepted this cycle of hatred and violence as our final resting state, even if the online world we’ve built seems to be uniquely suited to spread anger, fear and hatred.
Some of that seems to be a function of human psychology: it’s always easier to believe something bad about yourself or someone else than it is to believe something good. Social media technology puts an emphasis on sharing, and the more you’re exposed to extreme ideas, the less resistance you have to sharing them.
What follows is a Dante-like walk through the 21st Century’s eight digital hells. Like the poet, we start at the outer gates of the inferno and make our way down to the pit itself.
While Reddit is no longer as “anything goes” as it once was, it’s still possible to find subreddits (translation for boomers: forums dedicated to a single topic and moderated by users) with a dark edge to them on the “front page of the internet.”
In 2015, Reddit shut down a wave of forums for harboring hate, and in the wake of the Charlottesville white supremacist rally that led to violence and death, the company began another purge. The 2015 purge led to a backlash against the company, but post-Charlottesville the culture has changed (in part because the worst offenders have taken refuge in other online spaces).
Everyone knows that conversations on Facebook can devolve into hate speech, but the social network doesn’t quite have the reputation of being where the worst of the worst organize.
Unfortunately, with a user base that numbers in the billions and a reach that nearly encompasses the globe (China, not so much), even if only a small percentage of malicious posts gets through, that can mean millions of users exposed to hate speech and plenty of hate groups connecting.
To deal with the glut of content, Facebook relies on outside contractors to review and moderate all kinds of nastiness. It’s a job that leads to burnout, and sometimes even the radicalization of the moderators. It turns out that if you look at posts about things like flat-earth conspiracies all day, you may start to believe them.
When Facebook has taken a stance on policing political speech in the past, it has been called out — particularly by right-wing politicians, who have built a major talking point around conservatives being silenced online. Purges of political pages only add fuel to that narrative.
This would be less of a problem if white nationalist terrorists, like the one who took so many lives in Christchurch, didn’t draw inspiration from mainstream right-wing politicians. Not to mention the fact that Facebook’s drive to get people to embrace private groups moves more conversation out of public view, where it is even harder to moderate.
Facebook may be the center of gravity online, but the bleeding edge of culture happens on Twitter. When it comes to hate groups, the company has long had a problem — some of it technical, some seemingly a matter of will — with dealing with the worst offenders.
Twitter, after all, is where public shaming and harassment campaigns are a daily occurrence. There isn’t a part of the political spectrum that isn’t guilty of brigading users for one reason or another. But the presence of neo-Nazis and radical anti-feminists on the platform has led to firestorms that have largely set the tone for the broader cultural conversation.
Meanwhile, in March, Republican Congressman Devin Nunes sued the company for $250 million for allowing parody accounts like @DevinCow to mock him. Lawsuits like this can have a chilling effect on the company’s moderation policies and could lead to Twitter clamping down on political speech across the board. That, or the company could back away from doing any moderation at all, if doing less is deemed less likely to lead to costly lawsuits.
At this point in its life, YouTube is practically a utility. So much video is uploaded onto YouTube each minute that it would take many lifetimes to watch it all, let alone moderate it.
To organize it, the company turns to algorithms. Not just so that there are nice buckets of content, but so that people keep watching video after video. All so that YouTube can show you five seconds’ worth of an ad you can’t skip.
As it turns out, this incentive to keep people watching is also a great tool for radicalizing people politically, or organizing child porn rings in comments sections. By creating a system that values similarity — you liked this, and so did someone who liked something else you liked, so maybe you’ll like this other thing they liked — YouTube has basically unlocked the internet’s id.
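The similarity logic described above — viewers who liked this also liked that — is the core of item-based collaborative filtering. The toy sketch below illustrates that idea only; the data, function names, and scoring are hypothetical and are not YouTube’s actual system, which is far more complex.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical watch histories: user -> videos they engaged with.
histories = {
    "u1": {"a", "b", "c"},
    "u2": {"a", "b"},
    "u3": {"b", "c", "d"},
}

def co_occurrence(histories):
    """Count how often each ordered pair of videos shares a viewer."""
    counts = defaultdict(int)
    for videos in histories.values():
        for x, y in combinations(sorted(videos), 2):
            counts[(x, y)] += 1
            counts[(y, x)] += 1
    return counts

def recommend(video, counts, k=3):
    """Rank other videos by how often they co-occur with `video`."""
    scores = {y: n for (x, y), n in counts.items() if x == video}
    return sorted(scores, key=scores.get, reverse=True)[:k]

counts = co_occurrence(histories)
print(recommend("a", counts))  # videos most often watched alongside "a"
```

The feedback loop the article describes falls out of exactly this kind of scoring: whatever a cluster of similar viewers engages with most — however extreme — is what gets surfaced next.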
And then there is the simple fact that YouTube is home to all kinds of garden-variety hate — from crappy comments to hours-long screeds over genre movies. On YouTube, cultural warfare is entertainment, and that entertainment leads to big bucks for a few content creators and billions for Google. It certainly doesn’t help YouTube when its biggest star is name-checked in terrorist screeds — whether that name check was sincere, malicious or something else entirely. It still happened, and it makes the company look like, at best, a clueless pusher of the worst aspects of humanity.
Discord is the glue that holds many communities together, with a user base that revolves around gaming. Discord offers voice chat and instant messaging and runs on just about everything this side of an internet-connected toaster. There are more than 200 million users of Discord as of December 2018, if Wikipedia is to be believed.
Discord the company has become more proactive about shutting down servers where hate groups congregate — the software was used by the organizers of the white supremacist rally in Charlottesville, which led to action by the company. However, Discord’s greatest strengths — the ease of setting up a community, consistent user identity across servers and searchable public servers — are fully exploitable by those looking to create boltholes and to recruit vulnerable individuals into extremist thought. Which is why we find Discord here, just next to YouTube.
Stamping out servers full of trolls and white supremacists can be a game of whack-a-mole. One where it isn’t always possible to see the critters if they don’t want to be found.
Born in part as a protest to perceived bias by Twitter against conservative voices, Gab was created as an alternative to the microblogging site. It became a go-to platform for the full spectrum of the far right: from hard-core conservatives all the way out to neo-Nazis.
Under the banner of free speech, you can find all manner of hate speech, and the Southern Poverty Law Center identifies Gab as the platform that radicalized the man who attacked a Pittsburgh synagogue last October. While Gab claims to have 850,000 registered users, the SPLC asked social media analysis site Storyful to look into that number. They found that just “19,526 unique usernames had posted content” in a one-week period in January 2019.
Gab’s reputation for being ground zero for some of the most radical right-wing extremists has led to it having problems with hosting and payment companies. Still, while Gab is small, its community has managed to have a disproportionate impact on the world through the violent acts of its most unhinged members. A key difference between Gab and the other social media companies mentioned is that it doesn’t appear to be fazed at all by the fact that people are being radicalized on its site.
8chan is the imageboard site for people who are too extreme for 4chan’s anonymous threads. 4chan used to be the bottom rung of the internet before you had to spin up a Tor router and make for the “dark web,” but 8chan is now the internet’s filthy truck stop toilet. You might think that’s an insult, but honestly, that’s putting it mildly. And an 8chan member would likely be flattered, right before they (GRAPHIC DESCRIPTION DELETED).
8chan’s ethos revolves around a radical approach to free speech that is buried deep in the internet’s DNA. Shock and grotesquerie have long been a currency online — particularly amongst young men and adolescent boys. Yet gallows humor and a thirst for being desensitized to the worst that humanity has created can make for a feedback loop that renders radical juvenilia indistinguishable from political extremism. At a certain point, it no longer matters if someone is spreading hate memes for the lulz or to terrify others. What matters is the impact.
Everything we’ve walked through so far is fairly easily accessible online. If you know your way around a search engine — and most netizens do — you can find these sites and services. Odds are you found this article ON one of those services.
But there is another layer: the “dark web.”
Not so much a place as an idea, the “dark web” refers to all the sites, chat rooms, and little spaces online that aren’t publicly accessible. The dark matter of the online world, the “dark web” requires the use of services like Tor anonymity software to access an alternate universe of content.
What’s out there varies wildly, and it’s a mirror of the vanilla internet. It’s also where hate groups would slip away to if by some fiat of governmental or industry action all the Nazis and ISIS types were banned from the publicly-accessible net.
There’s a tradeoff to that idea, of course: forcing those groups out of the daylight would make it harder for them to recruit from places like YouTube and Discord. It would also make them harder to monitor.
This story was originally published on March 27, 2019, and updated on August 13, 2019.