Not so long ago, mastering the ancient Chinese game of Go was beyond the reach of artificial intelligence. But then AlphaGo, Google DeepMind's AI player, started to leave even the best human opponents in the dust. Yet even this world-beating AI needed humans to learn from. Now DeepMind's new version, unveiled on Wednesday, has ditched people altogether.
AlphaGo Zero has surpassed its predecessor's abilities, bypassing AI's traditional method of learning games, which involves watching thousands of hours of human play. Instead, it simply starts playing at random, honing its skills by repeatedly playing against itself. Three days and 4.9 million such games later, the result is the world's best Go-playing AI.
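The core idea of learning a game purely from self-play can be sketched in miniature. The toy below is a hypothetical illustration, not DeepMind's method: AlphaGo Zero combines a deep neural network with Monte Carlo tree search, whereas this sketch uses simple tabular value learning on tic-tac-toe. All names (`Q`, `self_play_game`, the reward scheme) are invented for this example; it only shows the loop of "start random, play yourself, nudge your estimates toward the outcome."

```python
# Toy self-play learner for tic-tac-toe (a hypothetical simplification of the
# self-play idea behind AlphaGo Zero; the real system uses deep networks + MCTS).
import random
from collections import defaultdict

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)   # (board, move) -> estimated value for the player to move
EPS, ALPHA = 0.2, 0.5    # exploration rate, learning rate

def choose(board, moves):
    if random.random() < EPS:                         # explore: random move
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])    # exploit: best-known move

def self_play_game():
    """Play one game against itself, then update values from the final result."""
    board, player, history = "." * 9, "X", []
    while True:
        moves = [i for i, c in enumerate(board) if c == "."]
        m = choose(board, moves)
        history.append((board, m, player))
        board = board[:m] + player + board[m + 1:]
        w = winner(board)
        if w or "." not in board:
            # Monte Carlo-style update: winner's moves get +1, loser's -1, draw 0.
            for b, mv, p in history:
                r = 0.0 if w is None else (1.0 if p == w else -1.0)
                Q[(b, mv)] += ALPHA * (r - Q[(b, mv)])
            return w
        player = "O" if player == "X" else "X"

random.seed(0)
results = [self_play_game() for _ in range(20000)]
```

Starting from purely random play, the agent generates its own training data; each finished game nudges the value table toward moves that led to wins, which is the same bootstrapping principle, scaled down enormously, that lets AlphaGo Zero improve without any human games.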
"It's more powerful than previous approaches because we've removed the constraints of human knowledge," says David Silver, the lead researcher for AlphaGo.