Oversight Board Was Critical of Meta's Moderation

The Meta Oversight Board Case: A Human Panel Reviews Automated Moderation of Content Around the Israel-Hamas War

Meta is hardly the only social media giant being scrutinized for its handling of content surrounding the Israel-Hamas war. Verified users on X (formerly Twitter) have been accused of being “misinformation super-spreaders” by the watchdog organization NewsGuard. TikTok and YouTube are also being scrutinized under the EU’s Digital Services Act following a reported surge of illegal content and disinformation on the platforms, and the EU has opened a formal investigation into X. The Oversight Board case, by contrast, highlights the risks of overmoderation and the tricky line platforms have to walk.

Both videos at the center of the case were removed after Meta adjusted its automated moderation systems to be more sensitive to any content coming out of Israel and Gaza that might violate its policies. That made the systems more likely to mistakenly remove content that should otherwise have remained up, and those decisions can have real-world implications.
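Meta hasn’t published the mechanics of those adjustments, but the effect the board describes matches the familiar tradeoff of lowering a classifier’s confidence threshold: more genuinely violating posts get caught, and more benign posts get swept up with them. The Python sketch below is purely illustrative, using hypothetical scores and thresholds; it is not Meta’s system.

```python
# Illustrative only: a toy model of the tradeoff the board describes.
# All scores and thresholds here are hypothetical; this is not Meta's code.

def should_remove(violation_score: float, threshold: float) -> bool:
    """Remove a post when the classifier's violation score crosses the threshold."""
    return violation_score >= threshold

# Hypothetical posts: (classifier score, whether the post actually violates policy).
posts = [
    (0.95, True),   # clearly violating
    (0.70, True),   # violating, but a borderline score
    (0.65, False),  # benign awareness-raising footage that "looks" violating
    (0.40, False),  # clearly benign
]

for threshold in (0.9, 0.6):  # a normal threshold vs. a crisis-lowered one
    removed = [(score, violates) for score, violates in posts
               if should_remove(score, threshold)]
    mistaken = sum(1 for _, violates in removed if not violates)
    print(f"threshold={threshold}: removed={len(removed)}, mistaken removals={mistaken}")
```

At the lower threshold, the borderline violation is caught, but so is the benign awareness-raising footage, which is the pattern of mistaken removals the board flagged.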

The board says that, in the case of one of the videos, both the removal and the rejection of the user’s appeal to restore the footage were carried out by Meta’s automated moderation tools, without any human review. The board took up a review of the decision on an “accelerated timeline of 12 days,” and once it did, the videos were restored with content warning screens.

The board also criticized Meta for demoting the two posts once the warning screens were applied, preventing them from appearing as recommended content to other Facebook and Instagram users, even though the company acknowledged that the posts were meant to raise awareness. Meta has since responded to the board’s decision to overturn the removals, saying that because the panel provided no recommendations, there will be no further updates to the case.

“We as the board have recommended certain steps, including creating a crisis protocol center, in past decisions,” Michael McConnell, a cochair of the Oversight Board, told WIRED. “The use of automation will remain. But my hope would be to provide human intervention strategically at the points where mistakes are most often made by the automated systems, and [that] are of particular importance due to the heightened public interest and information surrounding the conflicts.”

Meta wrote in a company post that it welcomes the Oversight Board’s decision. “Both expression and safety are important to us and the people who use our services,” the company said.