
New Leak Uncovers AI Assisted Mass Censorship on Facebook
• Activist Post

Whistleblowers at Meta's Integrity Organization have shared data with International Corruption Watch (ICW), revealing evidence of a mass censorship strategy that abuses Meta's reporting system.
Normal Takedown Process
1. User Reports—Any Facebook user can flag a post.
2. AI Screening—The post is first checked by a content enforcement AI model that reviews the text and any associated media. If the model is confident the post violates policy, it removes the post automatically.
3. Human Review—If the model is not confident, the report is escalated to a human reviewer.
4. Training Loop—If the human reviewer approves the takedown, the post is labeled and fed back into the AI's training dataset, allowing the model to adapt in real time (a simplified sketch of this loop appears below).
Meta's public description of this pipeline is available at https://transparency.meta.com/enforcement/detecting-violations/how-enforcement-technology-works.
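The described pipeline amounts to a feedback loop: a reported post either clears an automated confidence threshold or falls to a human reviewer whose takedown decision becomes a new training label. The Python sketch below illustrates that flow under stated assumptions; the confidence cutoff, class names, and stub model/reviewer functions are all hypothetical and do not come from the leak or from Meta.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# All names and thresholds here are illustrative assumptions, not Meta internals.
AUTO_REMOVE_CONFIDENCE = 0.90  # hypothetical cutoff for automatic removal

@dataclass
class Report:
    post_id: str
    text: str
    media: List[str] = field(default_factory=list)

class EnforcementModel:
    """Stand-in for the content enforcement AI model (step 2)."""
    def score(self, text: str, media: List[str]) -> float:
        # A real model would classify the text and media; this stub fakes a score.
        return 0.95 if "banned phrase" in text.lower() else 0.40

def human_review(report: Report) -> str:
    """Placeholder for the human reviewer's decision (step 3)."""
    return "takedown"

def handle_user_report(report: Report, model: EnforcementModel,
                       training_data: List[Tuple[str, str]]) -> str:
    confidence = model.score(report.text, report.media)  # step 2: AI screening
    if confidence >= AUTO_REMOVE_CONFIDENCE:
        return "removed_by_ai"                           # confident model removes the post
    decision = human_review(report)                      # step 3: escalate to a human
    if decision == "takedown":
        # Step 4: approved takedowns are labeled and fed back into the training set,
        # so the model learns from what reviewers approve.
        training_data.append((report.text, "violating"))
        return "removed_by_human"
    return "kept_up"

if __name__ == "__main__":
    dataset: List[Tuple[str, str]] = []
    print(handle_user_report(Report("p1", "a borderline post"), EnforcementModel(), dataset))
    print(dataset)  # the approved takedown is now labeled training data
```

The key property of such a loop is that whatever reviewers approve, for whatever reason, becomes ground truth for the next iteration of the model.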
Prioritizing Government Requests
Governments and other privileged entities have special access to submit takedown requests. These requests are given priority and sent directly to human reviewers. Depending on the country involved, they can be submitted through a form or by direct email to Meta.
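In scheduling terms, this is a priority queue in front of the human reviewers: privileged requests jump ahead of ordinary user reports. A minimal sketch, assuming made-up priority levels and request identifiers (none of these values are from the leak):

```python
import heapq
import itertools

# Priority values are assumptions for illustration; the leak only says privileged
# requests are prioritized and go straight to human reviewers.
PRIORITY_GOVERNMENT = 0  # lower value = reviewed first
PRIORITY_ORDINARY = 1

class HumanReviewQueue:
    """Review queue in which government/privileged requests jump ahead."""
    def __init__(self) -> None:
        self._heap: list = []
        self._counter = itertools.count()  # tie-breaker keeps FIFO order per priority

    def submit(self, request_id: str, privileged: bool) -> None:
        priority = PRIORITY_GOVERNMENT if privileged else PRIORITY_ORDINARY
        heapq.heappush(self._heap, (priority, next(self._counter), request_id))

    def next_for_review(self) -> str:
        return heapq.heappop(self._heap)[2]

queue = HumanReviewQueue()
queue.submit("user_report_17", privileged=False)
queue.submit("gov_request_3", privileged=True)  # submitted later, reviewed first
print(queue.next_for_review())  # -> gov_request_3
```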