YouTube’s livestream of hate speech hearings was flooded by hate speech

In the wake of the deadly mass shooting carried out in Christchurch, New Zealand by an alleged white nationalist, the US House Judiciary Committee held a hearing about hate speech online. You may recall that the gunman in the Christchurch shooting, which left 50 victims dead, not only livestreamed his attack on Facebook but also posted a hate-filled manifesto online. Facebook and Google were called before the committee to explain what steps they were taking to combat the problem, and both companies defended the practices they use to prevent online hate. However, Google, which owns YouTube, may have spoken too soon.

The hearing was being livestreamed on YouTube, and about 30 minutes in, many YouTube users were leaving racist and anti-Semitic comments in the stream’s live chat and comment section. At that point, YouTube shut down the chat and closed the comments section, but by then the damage had already been done. Committee chairman Rep. Jerrold Nadler, D-NY, was handed a sampling of the hateful comments and read them aloud during the hearing. “This just illustrates part of the problem we’re dealing with,” Nadler said.

[youtube https://www.youtube.com/watch?v=QwtMs2E-5zw]

Unfortunately, just because YouTube clamped down on one livestream doesn’t mean the hate speech went away. Instead, it simply relocated to other livestreams of the hearing. At least one recognized hate group ran its own livestream of the hearing and even raised money for itself through YouTube’s own platform. Fueled in part by social media, hate groups have not seen a surge like this since the days of the civil rights movement. According to the Southern Poverty Law Center, there are over 1,000 organized hate groups in the US alone, and violence committed by some of these groups has also risen sharply in the past few years.

What remains to be seen is whether these social media platforms can actually develop effective safeguards to screen for hate speech, or whether it will remain business as usual. Hate speech has been a problem since the early days of the internet, and no major platform has been able to tackle it convincingly.