A few years ago there was a marked improvement in the strength of Go-playing programs, with the development of 'Monte-Carlo' search. In a Monte-Carlo search, a move tree is generated to a certain depth (using all legal moves and replies), but when evaluating the leaf nodes a fixed number of games is played to the finish (using random legal moves), and the percentage of wins is used to score the starting position.
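As a rough sketch of the idea, here is a Monte-Carlo leaf evaluation applied to a toy Nim-like game (take 1-3 stones, last stone wins). The game itself is just a stand-in, since the article doesn't tie the technique to any particular implementation:

```python
import random

def random_playout(pile, player):
    # Play random legal moves (take 1, 2 or 3 stones) until the pile is
    # empty; the player who takes the last stone wins.
    while pile > 0:
        pile -= random.choice([m for m in (1, 2, 3) if m <= pile])
        player = 1 - player
    # 'player' is now the one left with no move, so the other player won
    return 1 - player

def monte_carlo_score(pile, player, playouts=1000):
    # Score a leaf position as the percentage of random playouts won
    # by the player to move -- the heart of the Monte-Carlo evaluation.
    wins = sum(random_playout(pile, player) == player
               for _ in range(playouts))
    return 100.0 * wins / playouts
```

In a full engine this score would replace a hand-crafted evaluation function at the leaves of the move tree.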
On first inspection this looks like it should not work, but for a number of games (including Hex and Lines of Action) it has produced good results. However, the results when applied to chess have been less promising. I suspect the reason is that in chess a single mistake is enough to lose, and such a search does not weight losses heavily enough. For example, a given position might have 24 moves that lose for player A and 1 that wins; a random selection would find the winning move in only 1 game in 25, scoring the position at just 4% for player A. A human player, on the other hand, should find the win 100% of the time, making the opponent's previous move a blunder. In the other games, such dramatic swings are less common (due to the rules of the game), and so an advantageous position more closely matches the eventual outcome.
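The 24-losers-and-1-winner argument can be simulated directly. The position below is hypothetical, set up only to show how far the random-playout score drifts from the true value:

```python
import random

# A hypothetical position with 25 legal moves for player A:
# 24 lose outright, 1 wins outright.
moves = ['lose'] * 24 + ['win']

trials = 100_000
# A random playout picks uniformly among legal moves, so it stumbles on
# the winning move only about 1 time in 25.
wins = sum(random.choice(moves) == 'win' for _ in range(trials))
print(f"random win rate: {100 * wins / trials:.1f}%")  # close to 4%
```

A perfect player would score this position at 100% for A, so the Monte-Carlo estimate of roughly 4% is badly misleading here.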
Of course, an engine that plays randomly could theoretically play an absolute masterpiece. Assuming an average of 40 legal moves in a position, there is a 2.5% chance of choosing the best move each time. So for a 40-move game such an engine would play perfectly once every 40^40 games (which is approximately 12 followed by 63 zeros!). At 1 second per game, such a game should eventually occur after about 3.8x10^56 years.
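These back-of-the-envelope figures are easy to check directly:

```python
# 40 choices per move, 40 moves: one perfect game every 40^40 games.
games = 40 ** 40

# At one game per second, convert that count of seconds into years.
seconds_per_year = 60 * 60 * 24 * 365
years = games / seconds_per_year

print(f"{games:.4e} games")  # about 1.2e64, i.e. 12 followed by 63 zeros
print(f"{years:.2e} years")  # about 3.8e56 years
```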