Over the past few years, hate crimes have steadily increased across the country. Though they are notoriously underreported, or simply not taken seriously when brought to authorities, the FBI still found that hate crimes rose 17 percent between 2016 and 2017. It’s no stretch to correlate the rise in hate crimes with the United States’ political climate, including the election of President Donald Trump and the subsequent rhetoric that his administration ushered in.
With the political climate continuing to worsen, it’s important to be aware of how hate operates online too. According to a survey from the Anti-Defamation League, 2018 turned out to be a record year for online hate speech. Social media platforms ranked especially high, with over half of all respondents (56 percent) saying they experienced hate on Facebook; meanwhile, Twitter and YouTube clocked in at 19 percent and 17 percent, respectively.
Often, people separate what happens online from “real life,” as if the digital space is unable to influence or be influenced by the rest of the world. However, recent violence that cost people their lives has shown this separation can no longer be entertained. The 2017 white supremacist “Unite the Right” rally in Charlottesville, where neo-Nazi James Fields drove his car into a crowd and killed Heather Heyer, was planned on Discord and other closed channels. White supremacists also used Facebook chats to plan a gathering for “white civil rights” in Washington, D.C., which Slate referred to as a “sequel to the deadly Unite the Right rally in Charlottesville.”
Earlier this year, a gunman killed 51 Muslims during Friday prayers in Christchurch, New Zealand, and live-streamed the attack on Facebook. Before making his way to the mosque, the gunman told viewers to “subscribe to PewDiePie,” a popular YouTuber who has come under fire for amplifying antisemitic rhetoric and white supremacist propaganda. It is clear that online hate has resulted in offline deaths, and social media platforms need to do more to address it.
People often point to policy changes for social media platforms as the solution. Those changes are important, but they will not solve the problem in its entirety. Part of the issue is that none of these platforms have the expertise to properly identify or moderate hate. For example, Facebook announced a ban on white nationalism this year following pressure from civil rights groups. However, the company then said a video invoking the myth of white genocide, a common piece of white nationalist propaganda, didn’t fall under the ban. For social media platforms to combat white supremacy, they have to actually enforce their policies, even when enforcement sweeps up conservative politicians.
Understanding hate speech requires an analysis of power. People have to take into account which groups are marginalized in real life and are vulnerable as a result. Time and time again, social media platforms have failed at this by treating all speech as equal, with disastrous consequences. For example, Facebook has banned users for saying “men are trash” while allowing InfoWars host Alex Jones to remain on its platform for years.
There is also the issue of hate coming from within government sectors. For example, President Donald Trump has participated in smear campaigns against sitting Congresswoman Ilhan Omar. Attacks against Omar are generally motivated by anti-Black Islamophobia, which Trump made clear when he took Omar’s quote out of context and combined it with footage from 9/11. Twitter has been wrestling with how to handle instances where world leaders like President Trump break the company’s policies.
In a recent blog post, Twitter shared its solution:
“There are certain cases where it may be in the public’s interest to have access to certain Tweets, even if they would otherwise be in violation of our rules. On the rare occasions when this happens, we’ll place a notice — a screen you have to click or tap through before you see the Tweet — to provide additional context and clarity.”
Twitter, like most tech companies, is in a tricky position navigating this issue. However, the company has allowed white nationalism to fester on its platform, to the point where the presence of Nazis on Twitter is a running joke. If tweets that appeal to white supremacists are allowed to remain up, the reason doesn’t matter; leaving them up will only further embolden those users.
For social media platforms to make any sort of meaningful progress, they need to take firm stances and stop fearing backlash from those who spread hate speech online, whether that’s conservatives or any other group.
Hate online cannot be separated from what we are seeing throughout the rest of the country. Social media platforms regularly serve as places where white supremacists seek comfort and solidarity and plan their next steps. That is what social media platforms need to admit and combat.