Facebook’s content moderation failures in Ethiopia

Misinformation and hate speech posted on social media amplify and exacerbate social tensions and can lead to violence in the real world. The problem is particularly pronounced in small, developing and already fragile states like Ethiopia and Burma; in the latter, Facebook propaganda has been linked to the genocide of the Rohingya people.

Facebook knows this is a problem, but it has barely adjusted its content moderation strategies in smaller countries struggling with conflict and ethnic divisions. The 2021 "Facebook Files" leak demonstrates this by documenting Facebook’s repeated content moderation failures in Ethiopia.

Social media has served as a lightning rod for ethnic conflict in Ethiopia, especially as the civil war has escalated. Some comments posted on Facebook border on incitement to genocide. In October 2021, Dejene Assefa, an activist with over 120,000 followers, called on patriots to take up arms against ethnic Tigrayans, writing: “The war is with those you grew up with, your neighbour. If you can rid your forest of these thorns…victory will be yours.” His post was shared over nine hundred times before it was flagged and deleted, but Assefa’s words can still be found in posts on Facebook.

Calls for violence on social media during the Ethiopian civil war have been linked to violence in the real world. The first major flashpoint came with the 2020 assassination of Hachalu Hundessa, a prominent singer who advocated for better treatment of the Oromo ethnic group. The riots after his murder were “supercharged by the almost instantaneous and widespread sharing of hate speech and incitement to violence on Facebook” and killed at least 150 Ethiopians.

The riots following Hachalu Hundessa’s assassination prompted Facebook to translate its community standards into Amharic and Oromo for the first time. However, the company only designated Ethiopia a “country at risk” in 2021, when the civil war had already begun. According to Facebook, countries at risk are characterized by growing social divisions and the threat that discourse on the platform will turn into violence. The three countries deemed most at risk – the United States, India and Brazil – each have “war rooms” where Facebook teams continuously monitor network activity. However, not all countries at risk receive such resources.

Facebook’s failures in Ethiopia are a symptom of a deep geographic and linguistic inequality in the resources devoted to content moderation. Facebook supports posts in 110 languages, but it only has the ability to review content in 70 of them. According to Facebook whistleblower Frances Haugen, Facebook adds a new language to its content moderation program “usually in crisis conditions”, and it can take up to a year to develop even the most minimal moderation systems. At the end of 2021, Facebook still lacked misinformation and hate speech classifiers in Ethiopia, critical resources deployed in other countries at risk. The company has partnered with two content moderation organizations in Ethiopia: PesaCheck and AFP Fact Check. Combined, these organizations have only five employees dedicated to reviewing the content published by Ethiopia’s seven million Facebook users.

The five Ethiopians working as content moderators can handle only a small percentage of posts flagged as problematic. To supplement human moderation, Facebook uses an artificial intelligence (AI) “network-based moderation” system to filter the majority of content. Network-based moderation uses pattern recognition algorithms to flag posts based on previously identified objectionable content. The system was developed in part because it does not require a deep understanding of a post’s language and can theoretically be deployed in contexts where Facebook lacks the linguistic capacity to perform full human moderation. Internal communications in the leaked Facebook Files show that network-based moderation is still experimental. It is also opaque: there are few public details about how the company understands and models patterns of malicious behavior, or how successful the approach has been so far. Despite these problems, Facebook employees pushed to apply network-based moderation to smaller, at-risk countries where the company has little language capacity, such as Ethiopia.
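To make the general idea concrete, the sketch below illustrates one way a system could flag new posts that closely resemble previously identified objectionable content using only surface-level pattern matching, without any understanding of the underlying language. The function names, similarity measure, and threshold are illustrative assumptions, not details of Facebook’s actual system.

```python
# Toy illustration (not Facebook's actual system): flag new posts that
# resemble previously removed content, with no language-specific knowledge.
# All names and the 0.6 threshold are hypothetical.

def shingles(text: str, n: int = 3) -> set:
    """Split text into overlapping character n-grams (language-agnostic)."""
    text = text.lower()
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Similarity between two shingle sets, from 0.0 to 1.0."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_post(post: str, known_violations: list[str], threshold: float = 0.6) -> bool:
    """Flag a post if it closely resembles any previously identified violation."""
    post_shingles = shingles(post)
    return any(jaccard(post_shingles, shingles(v)) >= threshold
               for v in known_violations)

# Hypothetical usage: near-copies of removed content are caught, but novel
# hate speech phrased in new ways slips through - which is why this kind of
# system cannot replace moderators who actually understand the language.
known_violations = ["example of previously removed incitement"]
print(flag_post("Example of previously removed incitement!!", known_violations))  # True
print(flag_post("a completely unrelated post", known_violations))                 # False
```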

If network-based AI moderation is to succeed in filtering out hate speech and misinformation, it must work in tandem with a team of well-resourced content moderators with linguistic and cultural expertise in a given country. AI built without a clear understanding of the local language and culture it is moderating cannot grasp the intricacies of that language. Yet Facebook rolled out its network-based moderation system in the belief that it could completely replace human moderation.

The lessons of the tragedy in Ethiopia are clear: Facebook must proactively identify countries at risk and build content moderation systems there around the linguistic and cultural knowledge of local employees. This requires directing more resources toward content moderation in the developing world, hiring more moderators with expertise in regional languages, and partnering with local organizations to better assess threat levels and identify problematic content. Facebook cannot continue to rely on global systems to replace local human moderators. The human cost of continuing to under-resource smaller, fragile, non-English-speaking countries is too high.

Caroline Allen is editor of the Brown Journal of World Affairs.
