Earlier this month it emerged that among Facebook's myriad algorithmically generated advertising categories was an entry for users whom the platform's data mining systems believed might be interested in treason against their government. The label had been applied to more than 65,000 Russian citizens, placing them at grave risk should their government obtain the list. Similarly, the platform's algorithms silently observe the actions and words of its two billion users, estimating which of them may be homosexual and quietly recording that estimate on their accounts. What happens when governments begin using such labels to surveil, harass, detain and even execute their citizens, based on the output of an American company's black box algorithms?
One of the challenges with the vast automated machine that is Facebook's advertising engine is that its sheer scale and scope mean it can never be fully subject to human oversight. Instead, it hums along in silence, watching the platform's two billion users like Big Brother and assigning each of them labels that record its estimates of everything from routine commercial interests to the most sensitive and intimate elements of their personalities, beliefs and medical conditions, information that governments could use to manipulate, arrest or execute them.