In the 1960s, a series of man-made disasters, from oil spills to rivers literally catching fire, enraged Americans and helped spur politicians to create environmental laws, including the requirement that federal agencies prepare Environmental Impact Statements on the effects of any proposed new construction project. It seems like common sense now, but it took many decades (and irreparable damage) to get the government to create these regulations.
Where's the metaphorical burning river for algorithms? Perhaps the revelation that predictive policing software is deeply biased against people of color. Or outrage over the use of predictive algorithms to evaluate teachers. Or maybe it'll be something far more pedestrian, like Amazon pushing its own products instead of the cheapest ones. Either way, we're nearing a moment of reckoning over how the government regulates AI, and it's going to be a long road toward reasonable, workable legislation.
This week the AI Now Institute, a leading group studying the topic, published its own proposal. It's called an "Algorithmic Impact Assessment," or AIA, and it's essentially an environmental impact report for automated software used by governments. "A similar process should take place before an agency deploys a new, high-impact automated decision system," the group writes.
An AIA would do four basic things, AI Now explains. First, it would require any government agency that wants to use an algorithm to publish a description of the system and its potential impact. Second, agencies would give external researchers access to the system so they can study it. Third, agencies would have to publish an evaluation of how the algorithm will affect the public and how they plan to address any biases or problems. And lastly, an AIA would require each agency to create a process by which ordinary people can hold it accountable when it fails to disclose important information about an algorithm.
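To make the proposal concrete, the four requirements could be modeled as a simple checklist. This is a hypothetical sketch for illustration only; the class and field names are my own invention, not part of AI Now's actual framework:

```python
from dataclasses import dataclass


@dataclass
class AlgorithmicImpactAssessment:
    """Hypothetical model of the four AIA requirements described above."""
    agency: str
    system_name: str
    # 1. Published description of the system and its potential impact
    public_description: str = ""
    # 2. External researchers granted access to study the system
    researcher_access_granted: bool = False
    # 3. Published evaluation of public impact and planned bias mitigation
    impact_evaluation_published: bool = False
    # 4. Process for the public to hold the agency accountable
    public_accountability_process: bool = False

    def is_complete(self) -> bool:
        """An AIA satisfies the proposal only if all four parts exist."""
        return (
            bool(self.public_description)
            and self.researcher_access_granted
            and self.impact_evaluation_published
            and self.public_accountability_process
        )


# Example: a deployment that has only published a description is not done.
aia = AlgorithmicImpactAssessment(
    agency="Example City PD",
    system_name="risk-scoring-tool",
    public_description="Scores neighborhoods for patrol allocation.",
)
print(aia.is_complete())  # False until all four requirements are met
```

The point of the structure is that the requirements are conjunctive: an agency that publishes a description but blocks researcher access would still fail the assessment.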