
Google’s AI Wins First Game in Historic Match With Go Champion


Lalit

Active member
Wow! Wow! Wow! Exciting!
[h=1]Google’s AI Wins First Game in Historic Match With Go Champion[/h]
SEOUL, SOUTH KOREA — After an extraordinarily close contest, Google’s artificially intelligent Go-playing computer system has beaten Lee Sedol, one of the world’s top players, in the first game of their historic five-game match at Seoul’s Four Seasons hotel. Known as AlphaGo, this Google creation not only proved it can compete with the game’s best, but also showed off its remarkable ability to learn the game on its own.
A group of Google researchers spent the last two years building AlphaGo at an AI lab in London called DeepMind. Until recently, experts assumed that another ten years would pass before a machine could beat one of the top human players at Go, a game that is exponentially more complex than chess and requires, at least among the top humans, a certain degree of intuition. But DeepMind accelerated the progress of computer Go using two complementary forms of machine learning—techniques that allow machines to learn certain tasks by analyzing vast amounts of digital data and, in essence, practicing these tasks on their own.
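The first of those two techniques, learning by analyzing recorded human games, is essentially supervised learning: the system is shown positions from expert games and nudged toward the moves the experts actually chose. Below is a minimal, hypothetical Python sketch of that idea. The dictionary "policy", the function names and the example positions are illustrative assumptions only (AlphaGo's real policy is a deep neural network, not a lookup table), not DeepMind's implementation. The second technique, practicing against itself, is sketched further down.
[code]
import random

# Toy stand-in for AlphaGo's policy network: maps a board position (encoded
# here as a plain string) to counts of the moves experts chose there.
policy = {}

def learn_from_expert_games(expert_records):
    """Supervised stage: count how often experts chose each move in each position."""
    for position, expert_move in expert_records:
        prefs = policy.setdefault(position, {})
        prefs[expert_move] = prefs.get(expert_move, 0) + 1

def suggest_move(position, legal_moves):
    """Sample a move, weighted toward what experts favoured in this position."""
    prefs = policy.get(position, {})
    weights = [1 + prefs.get(move, 0) for move in legal_moves]
    return random.choices(legal_moves, weights=weights, k=1)[0]

# Made-up example records: on an empty board, experts opened at D4 or Q16.
learn_from_expert_games([("empty-board", "D4"), ("empty-board", "Q16")])
print(suggest_move("empty-board", ["D4", "Q16", "K10"]))
[/code]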
The match—which extends through next Tuesday—serves as a litmus test for the progress of machine learning. Similar AI techniques have already reinvented myriad services inside Google and other Internet giants, including the Google search engine, and they’re poised to accelerate the progress of everything from scientific research to robotics.


[Photo: Lee Sedol. Geordie Wood for WIRED]
This morning in Seoul, the match was front-page news—quite literally—with the average Korean very much rooting for native son Lee Sedol. But there is just as much interest inside Google, and that includes some of its biggest names. Jeff Dean, one of the company’s most important engineers, is in Seoul for at least the first game. He delivered a speech this morning for the local press on the progress of machine learning inside Google, and just afterwards, Google chairman and former CEO Eric Schmidt sat down for lunch with a handful of reporters at the Four Seasons alongside Demis Hassabis, the CEO of DeepMind. Both carried a copy of The Korea Herald, whose front page carried a photo of Hassabis and Lee Sedol—above the fold.
“I expected it to be big,” Hassabis told us. “But not that big.”
[h=3]‘Difficult Fight’[/h]
Hassabis left the lunch early without taking a bite, saying he was needed as his DeepMind team made the final preparations for the match. Schmidt followed about thirty minutes later. As the match was set to begin, both turned up just outside the match room, trailed by a small mob of TV and print photographers. Apparently, two Korean senators also arrived just before this initial game. “This is a lot more attention than Go usually gets,” said one of the match’s English-language commentators, Michael Redmond. And Go is enormously popular in Korea. An estimated 8 million Koreans play the game, which uses small black and white stones on a 19-by-19 grid.


Lee Sedol and AlphaGo’s operator, DeepMind researcher Aja Huang, played the game in a small, closed room alongside a handful of officials. The press watched from two separate commentary rooms, one for Korean speakers and one for English. Sedol played black and AlphaGo white, which meant Sedol made the first move, a fairly common opening—and one that was only slightly different from the opening played by three-time European Go champion Fan Hui during his closed-door match with AlphaGo this past October. AlphaGo won that match five games to nil.
According to Michael Redmond, the English language commentator and a professional Go player who was born in the US, Lee Sedol’s opening was an aggressive one. The Korean is known for his aggressive and fast-moving style of play. “He starts early in his fight,” Redmond said. But AlphaGo responded with a game of “balance”—a relatively peaceful game, as Redmond described it. This was consistent with the way the machine played European champion Fan Hui in October.
But about 12 moves into the match, AlphaGo went on the offensive as well. “Lee Sedol invited the fight,” Redmond said, “but AlphaGo did not back away from it.” And the match continued apace. Redmond said he did not see any precedent for this in the match with Fan Hui. “The fight is getting really complicated,” he said. “This is actually the first time I have seen AlphaGo play a game that has this difficult of a fight.”
[h=3]Rapid Rate of Play[/h]
Redmond’s commentary was illuminating, but his view of AlphaGo also showed just how new—and indeed, how mysterious—the machine’s approach really is. Redmond kept referring to the AlphaGo “database,” but unlike past Go programs, AlphaGo relies far more on machine learning than on a pre-set list of moves. Part of the attraction of this match is that, before today’s game, no one was quite sure how well AlphaGo would perform, because it has spent the last five months essentially teaching itself to play the game at a higher level.


[Photo: Demis Hassabis. Geordie Wood for WIRED]
In October, though it soundly beat Fan Hui, AlphaGo was not good enough to beat someone like Lee Sedol. Fan Hui is ranked 633rd in the world, while Lee Sedol is ranked number five and widely regarded as the top player of the last decade. But over the last five months, using a technology called reinforcement learning, AlphaGo essentially played game after game against itself as a way of improving its skills.
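Reinforcement learning by self-play, as described above, boils down to: play many games against yourself and strengthen the preferences behind the moves that ended up on the winning side. Here is a hedged, toy Python sketch of that loop. The "game" is a trivial stand-in (a five-stone take-away game, not Go), and the simple win-counting update is an illustrative assumption, not DeepMind's actual training procedure.
[code]
import random

# Preference table for the toy game: (stones_left, stones_taken) -> score.
prefs = {}

def pick(stones):
    """Choose how many stones to take (1 or 2), weighted by learned preferences."""
    options = [t for t in (1, 2) if t <= stones]
    weights = [1.0 + prefs.get((stones, t), 0.0) for t in options]
    return random.choices(options, weights=weights, k=1)[0]

def self_play_game(start=5):
    """One game against itself: players alternate; whoever takes the last stone wins."""
    stones, player, history = start, 0, {0: [], 1: []}
    while True:
        take = pick(stones)
        history[player].append((stones, take))
        stones -= take
        if stones == 0:
            return history, player
        player = 1 - player

def reinforce(num_games=5000):
    """Reinforcement stage: reward every (position, move) pair played by the winner."""
    for _ in range(num_games):
        history, winner = self_play_game()
        for state_action in history[winner]:
            prefs[state_action] = prefs.get(state_action, 0.0) + 1.0

reinforce()
# After many self-play games, the opening choice from 5 stones should drift
# toward taking 2, the winning move in this toy game.
print(pick(5))
[/code]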
Clearly, the system has improved its play a great deal. At the lunch prior to the match, Hassabis said that since October, he and his team had also used machine learning techniques to improve AlphaGo’s ability to manage time. In the early to middle part of the game, it matched Lee Sedol with a rapid rate of play. “Both of them are playing fairly quickly,” Redmond said.
[h=3]‘A Scary Variation’[/h]
Lee Sedol took an (allowed) break about an hour-and-a-half into the game as his clock continued to run. And then the match returned to what commentator Chris Garlock called “a little bit more of a ballet.” Redmond said that AlphaGo was planning very much like a human professional, trying to reinforce its weaknesses—that is, its vulnerable groups of stones. “That is a pattern it has always had—the same as a really good Go player,” he said, referring to AlphaGo’s match with Fan Hui. “That is: making strong moves to reinforce weak groups—and potentially create weak groups [for its opponent].”


Then, at the two hour mark, AlphaGo made another particularly aggressive move, and Garlock said he was nervous—for Lee Sedol. “It just looks scary,” he said. And to a certain extent, Redmond agreed. “It’s a scary variation. Black has to be careful,” he said, referring to Lee Sedol. He was also impressed that AlphaGo was avoiding mistakes of its own. During the match with Fan Hui, Redmond said, AlphaGo made a number of fundamental errors, but this did not really happen in the early to middle part of today’s game.
Twenty minutes later, Redmond said that Lee Sedol could not survive by playing “peacefully.” He needed to attack on the right side of the board. But many other parts of the board were very much up for grabs. Garlock and Redmond agreed that the match was very much in the balance.
[h=3]The End Game[/h]
As the two players entered the end game, at the two-hour-and-forty-minute mark, the contest remained on a knife edge. Garlock and Redmond loosely tallied the number of points available to each player in various parts of the board, deciding that the match was still too close to call. But Garlock said that this could favor AlphaGo, because its strength is in “calculation.” There is some truth to this. AlphaGo uses its machine learning techniques to narrow down the scope of potentially advantageous moves, but then it uses what’s called a tree search to examine the possible outcomes of those moves.
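That division of labour, in which the learned policy narrows the candidate moves and a tree search then examines how each candidate could play out, can be sketched in a few lines. The following hypothetical Python outline is a heavy simplification (AlphaGo actually uses Monte Carlo tree search guided by policy and value networks); policy_scores, evaluate_position and apply_move are assumed stubs, named here only for illustration.
[code]
def policy_scores(position, legal_moves):
    """Stub for the learned policy: score each legal move (assumed, for illustration)."""
    return {move: 1.0 for move in legal_moves}

def evaluate_position(position):
    """Stub for the value estimate of a position from the current player's view."""
    return 0.0

def apply_move(position, move):
    """Stub: return the position reached after playing a move."""
    return position + (move,)

def search(position, legal_moves, depth, top_k=3):
    """Narrow the candidates with the policy, then look ahead on only those moves."""
    if depth == 0 or not legal_moves:
        return evaluate_position(position), None
    scores = policy_scores(position, legal_moves)
    candidates = sorted(legal_moves, key=lambda m: scores[m], reverse=True)[:top_k]
    best_value, best_move = None, None
    for move in candidates:
        child = apply_move(position, move)
        reply_moves = [m for m in legal_moves if m != move]
        # The opponent moves next, so their best outcome is our worst.
        value, _ = search(child, reply_moves, depth - 1, top_k)
        value = -value
        if best_value is None or value > best_value:
            best_value, best_move = value, move
    return best_value, best_move

# Toy usage: search two plies ahead from an empty "position" with three candidate moves.
print(search((), ["D4", "Q16", "K10"], depth=2))
[/code]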
Regardless, the machine continued to play at an enormously high level. “It’s more than I hoped for,” Redmond said. And, yes, the two commentators continually referred to AlphaGo as “he.”
As the game approached its conclusion, AlphaGo began using more and more of its available time (each player has about 2 hours and thirty minutes to make all moves). But as his clock dropped to around 34 minutes, Lee Sedol seemed to show the first signs of frustration, turning in his chair, wincing, and putting his hand to the back of his head. Then, about six minutes later, Redmond said: “I don’t think it’s gonna be that close.”
Indeed, at the three-hour-and-thirty-minute mark, Lee Sedol resigned.

Redmond called the result “a big surprise,” saying he had not expected a win for Google and AlphaGo. Of course, this was only the first of five games. The next is tomorrow at 1pm Seoul time, followed by a rest day. Game three is scheduled for Saturday. Whatever the ultimate outcome of the match, AlphaGo has proven its worth. And perhaps more importantly, it has proven that it can improve by leaps and bounds—mostly on its own. As Redmond said of AlphaGo, well before today’s match was over: “It’s already a success.”

http://www.wired.com/2016/03/googles-ai-wins-first-game-historic-match-go-champion/
 
Yes, it was "mega-tense"!

[h=1]Google's DeepMind beats Lee Se-dol again to go 2-0 up in historic Go series[/h]
[h=2]Human ingenuity beats human intuition again[/h]
Google stunned the world by defeating Go legend Lee Se-dol yesterday, and it wasn't a fluke — AlphaGo, the AI program developed by Google's DeepMind unit, has just won the second game of a five-game Go match being held in Seoul, South Korea. AlphaGo prevailed in a gripping battle that saw Lee resign after hanging on in the final period of byo-yomi ("second reading" in Japanese) overtime, which gave him fewer than 60 seconds to carry out each move.
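For readers unfamiliar with byo-yomi: once a player's main time is exhausted, each move must be completed within a short overtime period; overrunning a period consumes it, and overrunning the last one loses the game on time. A minimal, hypothetical Python sketch of that clock logic follows; the specific main time and period count below are assumptions for illustration, not the official match settings.
[code]
class ByoYomiClock:
    """Toy byo-yomi clock: a block of main time, then a few short overtime periods."""

    def __init__(self, main_seconds, periods, period_seconds):
        self.main = main_seconds
        self.periods = periods
        self.period_seconds = period_seconds

    def spend(self, move_seconds):
        """Charge one move's thinking time; return False if the player loses on time."""
        if self.main > 0:
            if move_seconds <= self.main:
                self.main -= move_seconds
                return True
            move_seconds -= self.main   # main time runs out partway through the move
            self.main = 0
        # Overtime: finishing within a period keeps it; overrunning it consumes it.
        while move_seconds > self.period_seconds:
            self.periods -= 1
            move_seconds -= self.period_seconds
            if self.periods <= 0:
                return False   # overran the final period: loss on time
        return True

# Illustrative settings (assumed): two hours of main time, then three 60-second periods.
clock = ByoYomiClock(main_seconds=2 * 60 * 60, periods=3, period_seconds=60)
print(clock.spend(90))   # True: the move fits comfortably inside main time
[/code]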
"Yesterday I was surprised but today it's more than that — I am speechless," said Lee in the post-game press conference. "I admit that it was a very clear loss on my part. From the very beginning of the game I did not feel like there was a point that I was leading." DeepMind founder Demis Hassabis was "speechless" too. "I think it's testament to Lee Se-dol's incredible skills," he said. "We're very pleased that AlphaGo played some quite surprising and beautiful moves, according to the commentators, which was amazing to see."

The close nature of the game appears to offer validation of AlphaGo's evaluative ability, the main roadblock to proficiency for previous Go programs. Hassabis says that AlphaGo was confident in victory from the midway point of the game, even though the professional commentators couldn't tell which player was ahead.
Until yesterday, the ancient Chinese board game of Go had never been played to a world-class level by an AI. Computer programs have long bested the world's leading human players of games like checkers and chess, but Go's combination of simple rules and intricate strategy has made it a major challenge for artificial intelligence research.
DeepMind's AlphaGo program uses an advanced system based on deep neural networks and machine learning, which has now seen it beat 18-time world champion Lee twice. The series is the first time a professional 9-dan Go player has taken on a computer, with Lee competing for a $1 million prize. AlphaGo's victory today means it leads the series 2-0; Lee had predicted he'd win 5-0 or 4-1 at worst, but he now needs to come out on top in all three remaining games, whereas AlphaGo could wrap up the series by Saturday.
http://www.theverge.com/2016/3/10/11191184/lee-sedol-alphago-go-deepmind-google-match-2-result/




 