Google AI wins 3 in a row against human Go champion in historic match



Lee Sedol reviews the third match against AlphaGo, in Seoul on March 12, 2016.
The unthinkable has happened: Google's AlphaGo computer program has easily dispatched multiple-time world champion Lee Sedol in a Go exhibition match, winning the first three games of a best-of-five series.

At a post-match news conference, Sedol apologized for not showing a better result. "I kind of felt powerless," he said.
He said that in the first game, he misjudged AlphaGo's capabilities. The second game would have been make-or-break, but he missed his opportunities. As for the third game, he said he had never felt so much pressure. Lee ended up resigning after 176 moves.
"I was incapable of overcoming this amount of pressure," he said.
"We are a bit stunned and speechless," DeepMind CEO Demis Hassabis said.

The landslide victory is an important, and largely unexpected, milestone for artificial intelligence. A few years ago, it was thought that AI would never master Go the way it had mastered chess, due to the sheer number of calculations involved (more possible board configurations than there are atoms in the universe, according to Google).
Then, in January, AlphaGo, a project of Google's AI subsidiary DeepMind, defeated European champion Fan Hui, proving for the first time that AI could hold its own against professional players.

Winning against the very best

The win against Sedol, however, is something different. The multiple-time world champion, one of the most dominant players of the last decade and currently ranked fourth in the world, expected to win easily, conceding only a slight chance that AlphaGo might take one of the five games.
According to Yoo Changhyuk, a 9-dan Go master commenting on the match, Sedol tried a number of strategies against AlphaGo. "During the first game, Lee Sedol made difficult moves to agitate AlphaGo, but failed to rattle it. Today, he tried the opposite: he played safe and headed for the endgame," he said after the second game.
In the end, nothing worked against the powerful AI. Its win does not make it world champion, but it certainly places it among the best of the best.
Even though he has lost three games in a row, Sedol still has a chance to save face, as all five games will be played to determine the final score.

Why is this win so important? 

Ever since IBM's chess-playing computer Deep Blue started regularly beating top players in the late 1990s, we've gotten used to artificial intelligence simply being better at certain types of challenges than humans. This is partly due to improvements in computing power, with computers being able to calculate millions of moves in advance within seconds, and partly due to the use of heuristic problem-solving methods in chess.
Go is different. Its rules, in which the black and white players compete to surround territory and capture each other's stones, are simpler, but the board is bigger (a 19x19 grid), and the number of combinations is far too large for any current computer to search by brute force.
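A back-of-envelope calculation shows the gap. Using commonly cited estimates for average branching factor and game length (roughly 35 legal moves over ~80 plies in chess, versus roughly 250 moves over ~150 plies in Go; these figures are illustrative estimates, not from the article), Go's game tree dwarfs both chess's and the estimated 10^80 atoms in the observable universe:

```python
# Rough game-tree size comparison: branching_factor ** game_length,
# computed in log10 space to keep the numbers manageable.
from math import log10

chess_tree = 80 * log10(35)    # roughly 10^124 possible chess games
go_tree = 150 * log10(250)     # roughly 10^360 possible Go games
atoms = 80                     # ~10^80 atoms in the observable universe

print(f"chess game tree ~ 10^{chess_tree:.0f}")
print(f"go game tree    ~ 10^{go_tree:.0f}")
print(f"Go's tree exceeds the atom count by a factor of ~10^{go_tree - atoms:.0f}")
```

These are estimates of game-tree size (sequences of moves), which is even larger than the count of legal board positions, but either way the number rules out exhaustive search.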
AlphaGo — explained in detail in a January paper in scientific journal Nature — uses neural networks to mimic expert Go players, and is able to learn from games it plays against itself. It also uses tree search algorithms to simulate thousands of random games of self-play.
In simpler terms, chess is more constrained. It starts from the same configuration every time; moves are relatively restricted, especially for pieces like the pawn; and losing a piece is, in most cases, a bad thing. In Go, you constantly have to evaluate all the possibilities: stones can be placed on any empty intersection on the board, and it is often hard to say whether a move was good or bad, because there are too many options down the road to consider.
"In AlphaGo there are two different neural networks," explains DeepMind researcher David Silver in a YouTube video posted by Nature in January. The "policy network" is used to suggest promising moves for each play, greatly reducing the number of possibilities AlphaGo needs to consider. The "value network" is then used to reduce the depth of the search. "Instead of having this enormously deep search that has to go all the way down to perhaps 300 moves, all the way down to the end of the game, what we do is we search to some modest depth, of perhaps 20 moves, and then we evaluate that position, without playing all the way to the end of the game," Silver explains.
"The search process itself is not based on brute force; it's based on something more akin to imagination," he says.
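The two ideas Silver describes can be sketched in a few lines. The following is an illustrative toy, not DeepMind's actual code: the `policy` and `value` functions here are random stand-ins for the trained networks, and the state and move representations are invented for the example. What it shows is the shape of the search: the policy prunes its breadth (a handful of candidate moves instead of all 361 points), and the value function truncates its depth (scoring a position instead of playing to the end).

```python
# Toy policy/value-guided search (illustrative only; AlphaGo's real
# search is Monte Carlo tree search guided by trained networks).
import random

def policy(state, legal_moves, top_k=3):
    """Stand-in policy network: pick a few 'promising' moves.
    Here it samples at random; a real network would rank them."""
    return random.sample(list(legal_moves), min(top_k, len(legal_moves)))

def value(state):
    """Stand-in value network: score a position in [-1, 1]
    without playing the game out to the end."""
    return random.uniform(-1.0, 1.0)

def search(state, legal_moves, depth):
    """Depth-limited negamax over policy-suggested moves only."""
    if depth == 0 or not legal_moves:
        return value(state)                      # truncate depth: evaluate here
    best = -1.0
    for move in policy(state, legal_moves):      # prune breadth: few moves, not all
        child = state + (move,)                  # hypothetical successor state
        best = max(best, -search(child, legal_moves - {move}, depth - 1))
    return best

score = search((), frozenset(range(10)), depth=4)
print(f"estimated position value: {score:.2f}")
```

With a branching factor of 3 and depth 4, this explores on the order of 80 positions rather than millions; the same pruning logic is what lets AlphaGo search a game whose full tree is astronomically large.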



The potential real-world applications of AlphaGo's success are many. DeepMind's Hassabis calls it a "stepping stone towards building a general artificial intelligence." And Silver offers the example of medicine, in which an AI could one day use similar techniques to find "which sequences of treatments lead to best outcomes."
