US-based social media platform Facebook has come under renewed scrutiny for persistently allowing users to incite ethnic massacres amid Ethiopia’s escalating war.
According to a probe by the UK-based Bureau of Investigative Journalism (TBIJ) and the Observer newspaper, released on Sunday, Facebook continues to permit users to post content that incites violence through hate speech and misinformation.
The investigation tracked down relatives who tie Facebook posts to the killings of their loved ones, and noted that a senior member of Ethiopia’s media blamed the company for “standing by and watching the country fall apart.”
The complaints came amid intensifying scrutiny of Facebook’s content moderation decisions; the platform had previously been blamed for playing a role in the ethnic persecution of Rohingya Muslims in Myanmar.
Mark Zuckerberg, the CEO of Facebook’s parent company Meta Platforms, announced on Wednesday that former British deputy prime minister Nick Clegg would become president of global affairs.
He said the move aimed to help the rebranded US technology company repair its reputation following the testimony of whistleblower Frances Haugen, who insisted that Facebook was “literally fanning ethnic violence” in Ethiopia.
The development also comes as Facebook considers launching an independent inquiry into its work in Ethiopia after its oversight board urged the company to probe how the platform had been used to spread hate speech.
TBIJ and Observer investigators interviewed a number of fact-checkers, civil society organizations and human rights activists in the country, many of whom described Facebook’s support for their work as far less than it should be.
Others said they felt requests for assistance had been ignored and that meetings had failed to materialize.
Such failures, they said, have helped fuel a conflict in which thousands have died and millions have been displaced since fighting broke out between government forces and armed opposition groups from the Tigray region in November 2020. Both sides have been accused of perpetrating atrocities.
Rehobot Ayalew, of the Ethiopian fact-checking initiative HaqCheck, said: “Most of the people have low media literacy, so Facebook is considered to be credible.”
"We come across [Facebook] images that are horrifying and hateful content,” Ayalew said. “You’re not getting the support from the platform itself that is allowing this kind of content. They can do more [but] they’re not doing anything.”
A number of civil society groups have made similar complaints of feeling ignored and sidelined. Facebook organized a meeting with several groups in June 2020 to discuss how the platform could best regulate content ahead of scheduled elections. As of November, two of the organizations involved said they had heard nothing about any subsequent meetings.
Haben Fecadu, a human rights activist who has worked in Ethiopia, said: “There’s really no excuse. I’ve doubted they have invested enough in their Africa content moderation.”
Fecadu added: “The problem is not specific to Tigray. Ethiopian citizens from every corner across ethnic groups are severely affected by hateful content circulating online.”
Compounding the concern is that, according to disclosures Haugen provided to the US Congress, Meta has known about such risks for years.
In January 2019, an internal report on “On-FB Badness” – a measure of harmful content on the platform – rated the situation in Ethiopia as “severe”, the measure’s second-highest category.
Almost a year later, Ethiopia had risen to the top of Facebook’s list of countries where it needed to take action.
A presentation dated Dec. 10, 2020, rated the risk of societal violence in Ethiopia as “dire” – Meta’s highest threat warning – making Ethiopia the only country to receive that ranking.
More than a year on, it is alleged that the firm has frequently ignored requests for support from fact-checkers based in the country. Some civil society organizations say they have not met with the company in 18 months.