The social media company has been embroiled in several content moderation controversies this year, from facing international outcry after it removed an iconic Vietnam War photo because of nudity, to allowing fake news to spread on its site.
Facebook has historically relied mostly on users to report offensive posts, which are then checked by Facebook employees against company "community standards." Decisions on especially thorny content issues that might require policy changes are made by top executives at the company.
Candela told reporters that Facebook was increasingly using artificial intelligence to find offensive material. The system is "an algorithm that detects nudity, violence, or any of the things that are not according to our policies," he said. The company had already been working on using automation to flag extremist video content, as Reuters reported in June.
Now the automated system is also being tested on Facebook Live, the company's live-streaming video service.