The contest to find a better method of rating chess players (see my earlier post) has come to an end. It was won by Yannis Sismanis, who utilised a method called stochastic gradient descent. The winner has published a short paper giving the technical details of his method.
Does this mean he has developed a better ratings 'system'? Based on my reading of the paper, I would say not, simply because the conditions of the contest involved fitting a model to a large set of training data and then checking its predictions against a smaller set of test data. However, the training data covered a number of rating periods (i.e. not just results from the previous rating period), while most rating systems simply use the results from tournaments played in the previous 2, 3 or 4 months. (In fact, his method applied a discounting factor to older games in the data set.)
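To give a rough feel for the general idea (and this is only a minimal sketch of my own, not Sismanis's actual model), the following Python snippet fits Elo-style ratings by stochastic gradient descent, with an exponential weight that discounts older games. The function name, learning rate and decay constant are arbitrary assumptions for illustration.

```python
# Illustrative sketch only: SGD-fitted ratings with a discount on older games.
# NOT the winning method; lr and decay are made-up values for demonstration.
import math
from collections import defaultdict

def expected_score(r_white, r_black):
    """Elo-style logistic win expectancy for White."""
    return 1.0 / (1.0 + 10 ** ((r_black - r_white) / 400.0))

def fit_ratings(games, epochs=50, lr=10.0, decay=0.1):
    """games: list of (white, black, score, months_ago) tuples,
    where score is 1 / 0.5 / 0 from White's point of view."""
    ratings = defaultdict(lambda: 1500.0)  # everyone starts at a nominal 1500
    for _ in range(epochs):
        for white, black, score, months_ago in games:
            weight = math.exp(-decay * months_ago)  # discount older results
            err = score - expected_score(ratings[white], ratings[black])
            # Gradient step on the log-loss of the prediction
            # (constants folded into lr), scaled by the recency weight.
            ratings[white] += lr * weight * err
            ratings[black] -= lr * weight * err
    return dict(ratings)

if __name__ == "__main__":
    sample = [("Anna", "Boris", 1.0, 0),
              ("Boris", "Carl", 0.5, 2),
              ("Carl", "Anna", 0.0, 5)]
    print(fit_ratings(sample))
```

The point of the recency weight is the same as the discounting factor mentioned above: recent games say more about current strength than games played years ago.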
However, the result does indicate possible improvements in how ratings are calculated. In fact, there is now a follow-up contest to look at this issue, once again run by Kaggle. The conditions are slightly more restrictive than in the previous contest, but the winner picks up $10,000 and a trip to Athens to present their system to FIDE.