You post your update, scan your feed, and maybe like a few things. But behind the screen lies an immensely complex set of algorithms that determine what you see. And with nearly 1.9 billion users worldwide, some of that content inevitably includes violent images and extremist rhetoric. So Facebook is making its black box a bit less opaque, outlining the tools it will use to deradicalize itself.
Beyond the stuff you'd expect—working closely with law enforcement, consulting terrorism experts, improving content moderation—the report confirms that Facebook is using artificial intelligence to ferret out extremism. "We want Facebook to be a hostile place for terrorists," two company execs said in a post, the first in a series called "Hard Questions."
The report, by counterterrorism head Brian Fishman and global policy manager Monika Bickert, illustrates the challenges inherent in containing extremism, and shows that Facebook is still playing catch-up. Still, counterterrorism experts praised the report and said it makes clear that Facebook finally takes the problem seriously.