Facebook Has Difficulty Detecting and Moderating Hate Speech and Misinformation in Myanmar

Internal Facebook documents viewed by The Associated Press show that Facebook still has problems detecting and moderating hate speech and misinformation on its platform in the southeast Asian nation of Myanmar. This remains the case even years after the company came under scrutiny for contributing to ethnic and religious violence there.

Scrolling through Facebook today, it’s not hard to find posts threatening murder and rape in Myanmar.

Despite the ongoing issues, Facebook saw its operations in Myanmar as both a model to export around the world and an evolving, cautionary case. Documents reviewed by AP show Myanmar became a testing ground for new content moderation technology, with the social media giant trialing ways to automate the detection of hate speech and misinformation with varying levels of success.

Facebook’s internal discussions on Myanmar were revealed in disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen’s legal counsel. The redacted versions received by Congress were obtained by a consortium of news organizations, including The Associated Press.

Facebook has had a shorter but more volatile history in Myanmar than in most countries. After decades of censorship under military rule, Myanmar was connected to the internet in 2000. Shortly afterward, Facebook partnered with telecom providers in the country, allowing customers to use the platform without paying for data, which was still expensive at the time. Use of the platform exploded. For many in Myanmar, Facebook became the internet itself.

“Facebook’s approach in Myanmar today is fundamentally different from what it was in 2017, and allegations that we have not invested in safety and security in the country are wrong,” said Frankel, the Facebook spokesperson.