An independent oversight body that reviews content moderation decisions at Meta has suggested that the company revise its cross-check program, and the company has agreed — sort of.
The Oversight Board, the “independent body” that reviews Meta’s content moderation decisions, has issued 32 proposals to change the program, which places content from “high quality” users in a moderation queue separate from the standard automated one the company uses for everyone else. Rather than being removed, flagged content from select public figures such as politicians, celebrities and athletes is left up “pending further human review.”
The board’s review came in direct response to a 2021 Wall Street Journal article that examined the exemptions. In its decision, the board recognized the inherent challenges of moderating content at scale, saying that while “a content review system should treat all users fairly,” the program struggles with “broader challenges of moderating immense amounts of content.”
For example, the board said that at the time of the request, Meta was performing such a high volume of daily moderation attempts — around 100 million — that even “99% accuracy would result in a million errors a day.”
Still, the board says the cross-check program appears to have less to do with “advanc[ing] Meta’s Human Rights Commitments” and is “more directly structured to address business concerns.”
Of the 32 proposals the board put forward to amend the cross-check program, Meta agreed to implement 11, partially implement 15, further study the feasibility of one, and not pursue the remaining five. In an updated blog post released on Friday, the company said it would make the program “more transparent through regular reporting” and refine the criteria for participation to “better consider human rights interests and justice.” The company will also update its operational systems to reduce the backlog of review requests, meaning malicious content will be reviewed and removed faster.
All 32 recommendations can be accessed via this link.
The board stated in its Twitter thread that the changes “could make Meta’s approach to error prevention more fair, credible and legitimate,” but that “several aspects of Meta’s response have not gone as far as we recommended in order to achieve a more transparent and fair system.”