Wednesday, 23 September 2015

A computer learns chess

A new chess program by the name of Giraffe is causing a bit of a buzz at the moment, due to the fact that it has seemingly taught itself to play chess at IM level. Lots of "what hope is there for humanity" headlines have accompanied this story, although it isn't the spectacular breakthrough that some writers think it is.
Certainly the notion that a chess engine can improve its own play is an intriguing one, but it isn't entirely new. The KnightCap chess engine was doing this well over a decade ago, although there are some differences between KnightCap and Giraffe. The major one is that KnightCap had its evaluation features coded into the program, and then learned how important each feature was, based on competition play on FICS. Some features were weighted more heavily than others, depending on how they contributed to the final result.
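As a rough illustration of that kind of learning, here is a minimal sketch of a temporal-difference weight update for a linear evaluation. The feature handling, learning rate and decay constant are illustrative only; the real KnightCap used TDLeaf(λ), which applies the update to the leaf evaluations of its search rather than directly to the positions played, as this simplified version does.

```python
# Simplified TD(lambda)-style weight learning for a linear evaluation.
# Constants and feature handling are illustrative, not KnightCap's actual values.

ALPHA = 0.01      # learning rate
LAMBDA = 0.7      # how far credit for a prediction error is passed back

def evaluate(weights, features):
    """Linear evaluation: a weighted sum of hand-coded feature values."""
    return sum(w * f for w, f in zip(weights, features))

def td_update(weights, game_features, result):
    """One pass over a finished game.

    game_features: one feature vector per position the learner reached.
    result: final outcome from the learner's point of view (+1, 0 or -1).
    """
    evals = [evaluate(weights, f) for f in game_features] + [result]
    for t, features in enumerate(game_features):
        # Sum of later prediction errors, discounted by how far ahead they are.
        delta = sum((LAMBDA ** (k - t)) * (evals[k + 1] - evals[k])
                    for k in range(t, len(game_features)))
        # For a linear evaluation the gradient w.r.t. each weight is simply
        # the corresponding feature value.
        for i, f in enumerate(features):
            weights[i] += ALPHA * delta * f
    return weights
```

Features that consistently line up with the eventual result have their weights reinforced over many games, which is the sense in which KnightCap "learned how important each feature was".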
In Giraffe's case significantly fewer features were coded in (although these did include material, piece-square tables and king safety), and it then tried to discover new features of its own, using large sets of training positions. The positions were initially scored by Stockfish (another engine), and the goal was to produce evaluations that matched as closely as possible. On its own this isn't anything special, but Giraffe then used this information to help it search for moves. Rather than just use a normal alpha-beta style search (which depends on move ordering for efficiency), Giraffe used a probabilistic search, where nodes are investigated based on the likelihood that they are part of the principal variation (i.e. the main line of the game). Assuming that the position isn't overwhelmed by tactics (where material is just going to dominate), such an approach should result in more 'human-like' moves, rather than mysterious computer moves. It is this feature which is exciting, although whether it is enough to overcome brute-force searchers remains to be seen.
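A minimal sketch of the probability-limited search idea, assuming the python-chess library for move generation. The threshold, the material-only evaluation and the uniform pv_probability function are all stand-ins (Giraffe estimates these probabilities with a neural network), so treat this as an illustration of the technique rather than the engine's actual code.

```python
# Probability-limited search: a node is expanded only while the estimated
# probability that it lies on the principal variation stays above a threshold,
# so unlikely branches are abandoned sooner than likely ones.
import chess

PROB_CUTOFF = 1e-3   # illustrative threshold, not Giraffe's actual value

PIECE_VALUES = {chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
                chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}

def static_eval(board):
    """Crude material count from the side to move's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def pv_probability(board, move):
    """Stand-in: treat every legal move as equally likely to be the PV move."""
    return 1.0 / board.legal_moves.count()

def search(board, prob, alpha, beta):
    if prob < PROB_CUTOFF or board.is_game_over():
        return static_eval(board)
    best = alpha
    for move in list(board.legal_moves):
        # Unlikely moves inherit a smaller probability budget, so their
        # subtrees are cut off earlier than those of likely PV moves.
        child_prob = prob * pv_probability(board, move)
        board.push(move)
        score = -search(board, child_prob, -beta, -best)
        board.pop()
        best = max(best, score)
        if best >= beta:
            break   # the normal alpha-beta cut-off still applies
    return best

# Search the starting position with a full probability budget of 1.0.
print(search(chess.Board(), 1.0, -100_000, 100_000))
```

The effect is a tree that is searched deeply along plausible lines and only shallowly along implausible ones, which is roughly what "more human-like" means here.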
It is worth pointing out that attempts to create a program that learns chess (rules and all) just from studying games haven't succeeded as yet. Even attempts to self-learn all the weights for material evaluation (starting from 0) have run into difficulties, as these values dominate everything else in the evaluation function (and without accurate numbers, engines barely get off the ground).
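A tiny illustration of why those material weights matter so much (the numbers below are the conventional centipawn-style figures, not anything from the Giraffe paper): with material weights still at zero, positions a whole queen apart evaluate identically, so the smaller positional terms have nothing solid to build on.

```python
# With unlearned (zero) material weights, a queen-up and a queen-down position
# look the same, so no other signal in the evaluation can get off the ground.

def eval_linear(weights, features):
    return sum(weights[name] * value for name, value in features.items())

learned_from_zero = {"queen": 0,   "mobility": 4}   # material not yet learned
sensible          = {"queen": 900, "mobility": 4}   # conventional material value

queen_up   = {"queen": +1, "mobility": 20}
queen_down = {"queen": -1, "mobility": 20}

print(eval_linear(learned_from_zero, queen_up),
      eval_linear(learned_from_zero, queen_down))   # 80 80  - indistinguishable
print(eval_linear(sensible, queen_up),
      eval_linear(sensible, queen_down))            # 980 -820
```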
You can check out the paper on Giraffe here, and see for yourself how it all works, noting that KnightCap does get plenty of mentions in the paper.

3 comments:

Ian Rout said...

I can't say I'm entirely convinced that humanity needs to be alarmed yet. The end product of all this self-learning is a program 900 points weaker than Stockfish, and even that's dubious since it seems to be based on assessing individual moves found at blitz speeds, rather than whole games where one inferior move can ruin the work of ten good moves.

Of course every world domination starts with a single step, but on the other hand if further refinements simply lead to Giraffe asymptotically approaching Stockfish it won't be a big deal for chess. What it can achieve or lead to in other fields may be another question. I'd be interested to read Shaun's views on its potential.

I like the anthropomorphic "it is likely that it knows about patterns that have not yet been studied by humans". In that case we should see Giraffe beating GMs.

Does Giraffe play online anywhere? A few thousand internet junkies would give it a good workout. (I see it's mentioned as being tested on ICC but without any mention of results or analysis).

Su Pang said...

Well, Giraffe is dead. The programmer got bought out by Google DeepMind, and it would be a conflict of interest to continue work on it. Probably for the best, as he really didn't seem to have a grasp on AI issues, ending his post with the bizarro-world claim that if computers had to search at 5 nodes per second, then humans would still be better. Great, let's make computers inherit human failings? No, let computers do their own thing, and do it better, rather than attempt to imitate humans in learning.

His thesis had similar grotesque comparisons (Table 3): even allowing a 10x speedup for his own scheme, it's still heaps behind Stockfish 5 in "positional understanding" (whatever that really means). Deliberately ignoring efficiency in evaluation function quality, well, what's the use? And who cares about STS scores, when Elo is what matters at the end of the day? His claim that "many of [the other evaluation functions] have been worked on by grandmasters" is rubbish (Komodo was not included), as is his tall tale about the test underestimating Giraffe because "the test suite is famous" and so others specifically tune for it. I realise he's only a master's student, but rather more scepticism is warranted, and his advisor should admonish him. (Further, the "Approx rating" column of Table 3 is not calibrated to 0.1 sec thinking times but comes from CCRL 40/40, etc.)

Anonymous said...

The well-renowned Sergei Markoff largely agrees that the paper is awful. http://talkchess.com/forum/viewtopic.php?t=59292

Am I the only one being driven mad by reading this paper? :)

Most enraging quotations: "...while computers have become faster, the way chess engines work has not changed. Their power relies on brute force, the process of searching through all possible future moves to find the best next one"

"In a world first, a machine plays chess by evaluating the board rather than using brute force to work out every possible move"

And so on.

When someone says: "Let's limit computer speed to only 5 nps and see what happens...", I want to reply: "Let's limit your brain size to only 1000 neurons and then see!"