Facebook has removed about 30 million posts containing hate speech or coronavirus misinformation, including bogus cures, from its platforms.
The social networking company removed 22.5 million hate speech posts in the second quarter, more than double the figure for the first quarter of this year, according to its community standards enforcement report.
It also removed 3.3 million pieces of hate speech content on Instagram in the second quarter, a fourfold increase over the previous quarter, along with 7 million posts spreading COVID-19 misinformation, including fake cures and false preventive measures.
The California-based company credited the rise in hate speech detections to improvements in its English-language detection technology and to the expansion of its automated systems to other languages such as Spanish, Arabic and Indonesian.
Facebook also updated its hate speech policies, banning content that depicts blackface or promotes stereotypes about Jewish people controlling the world.
“We’re launching a Diversity Advisory Council that will provide input based on lived experience on a variety of topics and issues,” said Guy Rosen, VP of Integrity.
The company has also created an Instagram Equity Team and a Facebook Inclusive Product Council to build products that are fair and inclusive.
Since October 2019, it has removed 23 organizations from its platform, more than half of which supported white supremacy.
Improved technology and the return of some of its content reviewers also helped the company take down 8.7 million terrorism-related posts, an increase of around 40% from the previous quarter.
However, the number of fake accounts deleted fell to 1.5 billion, down from 1.7 billion in the previous quarter. Improvements in detection systems that block attempts to create fake accounts have left fewer fake accounts to disable, Facebook said.