Google’s DeepMind says it has made another big advance in artificial intelligence by getting a machine to master the Chinese game of Go without help from human players.
Named AlphaGo Zero, the AI program has been hailed as a major advance because it mastered the ancient board game from scratch, with no human help beyond being told the rules. In games against the earlier version of AlphaGo that famously beat the South Korean grandmaster Lee Sedol in 2016, AlphaGo Zero won 100 to 0.
The newest version of the Go-playing algorithm is not only stronger than the original AlphaGo; it taught itself how to play entirely on its own, given only the basic rules of the game.
But DeepMind’s achievement goes beyond playing a board game exceedingly well: a system that can learn without human examples has important implications for AI in the near future, particularly for problems where expert data is scarce or unavailable.
“By not using human data—by not using human expertise in any fashion—we’ve actually removed the constraints of human knowledge,” AlphaGo Zero’s lead researcher, David Silver, said at a press conference.
The findings suggest that AI systems based on reinforcement learning can perform better than those that rely on human expertise, said Satinder Singh of the University of Michigan.
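The core idea behind the approach Singh describes can be illustrated with a toy example. AlphaGo Zero itself pairs deep neural networks with Monte Carlo tree search, which is far beyond a few lines of code, but the principle of learning purely from self-play, given only the rules, shows up even in a tabular sketch. The game below (single-pile Nim, a hypothetical stand-in chosen for brevity, not anything DeepMind used) is one in which the program starts knowing only the legal moves and discovers good play by repeatedly playing itself:

```python
import random

TAKE = (1, 2, 3)   # legal moves: remove 1, 2, or 3 stones
PILE = 10          # starting pile; whoever takes the last stone wins

def train(episodes=20000, alpha=0.5, eps=0.2, seed=0):
    """Learn action values purely from self-play, given only the rules."""
    rng = random.Random(seed)
    Q = {}  # Q[(pile, move)] = estimated value for the player to move
    for _ in range(episodes):
        pile, history = PILE, []
        while pile > 0:
            moves = [m for m in TAKE if m <= pile]
            if rng.random() < eps:            # occasionally explore
                move = rng.choice(moves)
            else:                             # otherwise exploit what it knows
                move = max(moves, key=lambda m: Q.get((pile, m), 0.0))
            history.append((pile, move))
            pile -= move
        reward = 1.0  # the player who emptied the pile just won
        for state, move in reversed(history):
            q = Q.get((state, move), 0.0)
            Q[(state, move)] = q + alpha * (reward - q)
            reward = -reward  # zero-sum game: flip perspective each ply

    return Q

def best_move(Q, pile):
    """Greedy move under the learned values."""
    return max((m for m in TAKE if m <= pile),
               key=lambda m: Q.get((pile, m), 0.0))

if __name__ == "__main__":
    Q = train()
    print(best_move(Q, 3))  # with 3 stones left, taking all 3 wins outright
```

No human game records appear anywhere: the only feedback is who won each self-played game, and the program's play near the end of the game sharpens first, then propagates backward, a dynamic loosely analogous to how AlphaGo Zero improved from random play.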
“However, this is not the beginning of any end because AlphaGo Zero, like all other successful AI so far, is extremely limited in what it knows and in what it can do compared with humans and even other animals,” he said.
AlphaGo Zero’s ability to learn on its own “might appear creepily autonomous”, added Anders Sandberg of the Future of Humanity Institute at Oxford University.
As for AlphaGo Zero, the future is wide open. Go is complex enough that there is no telling how good a self-taught program can become, and it now has a learning method to match the expansiveness of the game it was built to play.