Review of efforts to evolve strategies to play checkers as well as human experts
13 October 2000
We have been experimenting with evolutionary approaches to create artificial neural networks that can play checkers at a level that is competitive with human experts. In particular, multilayer perceptrons were used as evaluation functions to compare the worth of alternative board positions. The weights of these neural networks were evolved in a coevolutionary manner, with networks competing only against other extant networks in the population. No external expert system was used for comparison or evaluation. Feedback to the networks was limited to an overall point score based on the outcome of 10 games at each generation. No attempt was made to assign credit to moves in isolation or to prescribe useful features beyond the possible inclusion of the piece differential. Initial results indicated that the best-evolved neural network earned a rating of 1750, placing it as a Class B player. This level of performance is competitive with many humans. More recent results have generated networks with ratings in the 1900s, in Class A, one level below expert as recognized by the American Checker Federation.
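The coevolutionary loop described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the network architecture, population size, mutation scale, and the +1/−2 win/loss point scheme are assumptions for the sketch, and `play_game` is a hypothetical placeholder that stands in for a full checkers game between two evaluators.

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUTS = 32       # one entry per playable checkers square (assumed encoding)
HIDDEN = 10         # illustrative hidden-layer size, not the paper's architecture
POP_SIZE = 20       # illustrative population size
GAMES_PER_GEN = 10  # each network's score comes from 10 games per generation

def init_net():
    """Random multilayer perceptron weights: input->hidden and hidden->output."""
    return {"w1": rng.normal(0, 0.5, (N_INPUTS, HIDDEN)),
            "w2": rng.normal(0, 0.5, (HIDDEN, 1))}

def evaluate(net, board):
    """Score a board position; higher is better for the side to move."""
    h = np.tanh(board @ net["w1"])
    return float(np.tanh(h @ net["w2"]))

def play_game(net_a, net_b):
    """Hypothetical stand-in for a full checkers game between two evaluators.
    Here the outcome is decided by comparing evaluations of one random board,
    purely so the loop below runs end to end."""
    board = rng.normal(size=N_INPUTS)
    return 1 if evaluate(net_a, board) > evaluate(net_b, board) else -1

def generation(pop):
    """One coevolutionary step: networks are scored only by outcomes against
    peers in the population (no external expert), the top half survives, and
    the population is refilled with mutated copies of the survivors."""
    scores = np.zeros(len(pop))
    for i, net in enumerate(pop):
        for _ in range(GAMES_PER_GEN):
            j = rng.integers(len(pop))
            while j == i:
                j = rng.integers(len(pop))
            result = play_game(net, pop[j])
            # Asymmetric points (win +1, loss -2) are an assumed scheme here.
            scores[i] += 1 if result > 0 else -2
    order = np.argsort(-scores)
    survivors = [pop[k] for k in order[:(len(pop) + 1) // 2]]
    children = [{"w1": p["w1"] + rng.normal(0, 0.05, p["w1"].shape),
                 "w2": p["w2"] + rng.normal(0, 0.05, p["w2"].shape)}
                for p in survivors]
    return (survivors + children)[:len(pop)]

pop = [init_net() for _ in range(POP_SIZE)]
for _ in range(3):
    pop = generation(pop)
print(len(pop))  # population size stays constant across generations
```

Note how the feedback signal matches the abstract: each network receives only an aggregate point score over its games, with no per-move credit assignment and no hand-crafted features beyond what the raw board encoding provides.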
© (2000) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Kumar Chellapilla, David B. Fogel, "Review of efforts to evolve strategies to play checkers as well as human experts", Proc. SPIE 4120, Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation III, (13 October 2000); https://doi.org/10.1117/12.403634

