In a pivotal legal decision, Kenya's High Court has ruled that Meta, the parent company of Facebook, can be sued in Kenya over allegations that its platform played a role in fueling ethnic violence during Ethiopia's Tigray conflict between 2020 and 2022.
The case, brought forward by the Katiba Institute alongside two Ethiopian researchers, claims that Facebook’s algorithm contributed to the spread of hate speech and incitement, worsening the humanitarian crisis during the civil war in Ethiopia’s northern region.
Meta had previously challenged the suit, arguing that because it is not formally registered in Kenya, local courts lacked jurisdiction. However, the High Court dismissed this argument, asserting that the gravity of the allegations and their local impact warranted judicial review.
“The court has taken a bold step in acknowledging its responsibility to address issues that, while global in nature, have direct consequences for people in Kenya and neighboring regions,” said Nora Mbagathi, Executive Director of the Katiba Institute.
One of the plaintiffs, Abrham Meareg, alleges that his father was targeted and later killed after violent posts about him appeared on Facebook. Another, human rights researcher Fisseha Tekle, claims he was subjected to coordinated online attacks because of his advocacy work.
The suit calls on Meta to establish a compensation fund for victims of hate and violence and to revise its algorithm so that it no longer promotes harmful content.
Meta has not yet issued a response to the court’s ruling. The company has previously stated that it has made substantial investments in content moderation and has taken action against harmful content on its platforms.
This case is not the only legal challenge Meta faces in Kenya. The tech giant is also being sued by former content moderators who allege unfair labor practices, including poor working conditions and retaliation for attempting to unionize.
These developments follow Meta’s January 2025 decision to wind down parts of its content moderation operations, including the termination of its U.S.-based fact-checking program. The company also announced it would no longer proactively search for harmful content, instead relying primarily on user reports.
As the case proceeds, it could set a global precedent for how tech companies are held accountable for the real-world impact of their platforms in jurisdictions where they may not be directly registered.