[ Wired ] Cade Metz:
“AlphaGo learned to discover new strategies for itself, by playing millions of games between its neural networks, against themselves, and gradually improving,” says DeepMind researcher David Silver.
According to Silver, this allowed AlphaGo to top other Go-playing AI systems, including Crazy Stone. Then the researchers fed the results into a second neural network, which takes the moves suggested by the first and uses many of the same techniques to look ahead to the result of each move. This is similar to what older systems like Deep Blue did with chess, except that AlphaGo keeps learning as it goes along and analyzes more data, rather than exploring every possible outcome through brute force. In this way, AlphaGo learned to beat not only existing AI programs but a top human as well.
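The contrast with brute force can be made concrete with a toy sketch. This is not DeepMind's actual algorithm (AlphaGo combines deep networks with Monte Carlo tree search); the `policy`, `value`, and `lookahead` functions below are hypothetical stand-ins, using a trivial "get close to 21" game, just to show the idea of one component proposing a few promising moves and another scoring positions so the search never expands every branch:

```python
# Toy illustration (NOT DeepMind's code) of policy-guided lookahead:
# a "policy" proposes a few promising moves, a "value" estimate scores
# positions, and the search expands only the policy's suggestions
# instead of every legal move.

def apply(state, move):
    return state + move

def value(state):
    """Stand-in for the second network: score a position directly,
    without playing the game out to the end. Toy rule: the closer
    the running total is to 21, the better."""
    return -abs(21 - state)

def policy(state, legal_moves, k=2):
    """Stand-in for the first network: rank moves by a cheap estimate
    and keep only the top k, pruning the search tree."""
    ranked = sorted(legal_moves, key=lambda m: value(apply(state, m)), reverse=True)
    return ranked[:k]

def lookahead(state, depth, moves=(1, 2, 3, 4, 5)):
    """Depth-limited search over only the policy's suggested moves."""
    if depth == 0:
        return value(state), None
    best_score, best_move = float("-inf"), None
    for m in policy(state, moves):          # pruned, not exhaustive
        score, _ = lookahead(apply(state, m), depth - 1)
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

score, move = lookahead(15, 2)
print(score, move)  # from 15, playing 5 then 1 reaches exactly 21
```

A brute-force searcher would expand all five moves at every level; here the policy cuts that to two, which is the same trade AlphaGo makes at vastly larger scale.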
AlphaGo beat European Go champion Fan Hui five games out of five. That may not sound like a big deal. Deep Blue beat chess champion Garry Kasparov. Watson beat Jeopardy champions. And now AlphaGo has beaten a European Go champion. So what? Well, consider this: the number of possible board positions in Go is greater than the number of atoms in the observable universe. In other words, there is vastly more to compute than in chess.
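The claim is easy to sanity-check with a back-of-the-envelope bound. Each of the 19×19 = 361 points on a Go board is empty, black, or white, so 3^361 is a crude upper bound on board configurations (many of those are illegal positions, so the true count is smaller, but still astronomically large), while the number of atoms in the observable universe is commonly estimated around 10^80:

```python
# Rough sanity check of the search-space claim.
# 3**361 is an upper bound on Go board configurations
# (each of the 361 points is empty, black, or white).
upper_bound = 3 ** 361
atoms = 10 ** 80          # rough common estimate, observable universe

print(len(str(upper_bound)))    # 3**361 has 173 decimal digits
print(upper_bound > atoms)      # True: the bound dwarfs the atom count
```

Even this loose bound exceeds the atom estimate by more than ninety orders of magnitude, which is why exhaustive search of the kind Deep Blue leaned on is hopeless for Go.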
In March, AlphaGo will challenge South Korean master Lee Sedol, the reigning world champion and widely considered the best Go player in the world.