Every couple of years or so I try to write a new computer chess program. Previous programs include Vchess (featured here in the past) and Fencer. Currently I am working on a program called Pangu, which combines the ideas I developed in my previous programs with better programming ideas I've seen in programs such as Fruit.
One thing most chess programmers do is test their programs against a suite of test positions. Some of these suites are purely tactical (e.g. 1001 Brilliant Ways to Checkmate), while others combine tactics and positional play. The most famous of these is the Bratko-Kopec Test, which was developed in the early 1980s and was designed to measure the strength of both human and computer players.
The Bratko-Kopec test contains 24 positions, half of which are tactical and half of which are based around understanding pawn levers. Two such positions are shown on the right.
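Suites like the Bratko-Kopec test are usually distributed in EPD format, where each record is a FEN-style position followed by opcodes such as `bm` (best move) and `id`. As a rough sketch, here is a minimal parser for records in that style (the example record below is illustrative, not an actual Bratko-Kopec entry):

```python
# Minimal EPD parser sketch: splits a record into its four FEN-like
# position fields plus opcodes such as "bm" (best move) and "id".
def parse_epd(line):
    fields = line.strip().split(" ")
    position = " ".join(fields[:4])  # placement, side to move, castling, en passant
    ops = {}
    for op in " ".join(fields[4:]).split(";"):
        op = op.strip()
        if op:
            name, _, value = op.partition(" ")
            ops[name] = value
    return position, ops

# Illustrative record in the style of a test suite (not a real BK position).
pos, ops = parse_epd(
    'rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - bm d4; id "demo.01";'
)
print(ops["bm"])  # the expected best move, here "d4"
```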
Now at the time I ran the first test my program was pretty good at searching, but had very little chess knowledge. In fact all it knew was the value of the pieces, and that certain squares were good for certain pieces. It knew nothing about pawn structures, open files, king safety, etc.
Even with this lack of knowledge it did surprisingly well. It scored 16/24 (at 2 seconds per position), solving 9 of the tactical positions and 7 of the "lever" positions. Of the two diagrammed positions it failed to solve the first one (1.d5!) but it found 1. ... f5 in the second.
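The scoring procedure above (give the engine a fixed time per position, count a point when it picks the suite's best move) can be sketched as a simple loop. The `search(position, seconds)` interface here is a hypothetical stand-in for an engine call, not Pangu's actual interface:

```python
# Sketch of a test-suite scoring loop. `search` is a hypothetical
# engine function returning the engine's chosen move as a string.
def run_suite(records, search, seconds=2.0):
    solved = 0
    for position, best_moves in records:
        move = search(position, seconds)  # engine thinks for `seconds`
        if move in best_moves:            # "bm" may list several acceptable moves
            solved += 1
    return solved

# Usage with a stub engine that always answers "d4" (illustration only).
records = [("position one", {"d4"}), ("position two", {"f5"})]
print(run_suite(records, lambda pos, t: "d4"))  # -> 1
```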
So I wonder about the following. Can the ability to calculate 5 or 6 moves deep (perfectly) replace the need for any deep knowledge of the game? And are there human players who already do this?