The LearningImpl project uses the Neuroph Java framework to apply supervised machine learning and tune Bagatur's evaluation function. It uses a multilayer perceptron (MLP), a feedforward artificial neural network. The network has a single layer with many features: with more layers the evaluation function becomes too slow to compute, and the extra quality they bring does not compensate for the loss of speed.
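To make the speed argument concrete, here is a minimal plain-Java sketch (not the actual Bagatur or Neuroph code) of a forward pass through a single layer of weights with a sigmoid activation, the default transfer function in Neuroph's MLP. The cost of one evaluation is one weighted sum per output neuron, and every extra layer multiplies that per-position cost.

```java
// Minimal sketch, not the actual Bagatur code: a feedforward pass
// through an MLP with a single layer of weights.
public class MlpSketch {

    // Sigmoid activation, the default transfer function in Neuroph's MLP.
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // One weighted sum over all inputs: cost is O(inputs).
    // Every extra layer multiplies the per-position cost, which is why
    // the project keeps a single layer with many input features.
    static double evaluate(double[] inputs, double[] weights, double bias) {
        double sum = bias;
        for (int i = 0; i < inputs.length; i++) {
            sum += inputs[i] * weights[i];
        }
        return sigmoid(sum);
    }

    public static void main(String[] args) {
        double[] inputs = {1.0, 0.0, -1.0};
        double[] weights = {0.5, 0.3, -0.2};
        System.out.println(evaluate(inputs, weights, 0.0));
    }
}
```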
How to run
If you want to run one of the supervised learning main classes, you first have to generate training chess positions with evaluations using the UCITracker project, which saves these positions into a file. For that purpose you need a strong chess engine such as Stockfish, Komodo, Houdini or Rybka. Then use UCITracker to run self-play games of that engine and record the positions played together with their evaluations, so that supervised learning can take place later.
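The actual file format written by UCITracker is implementation-specific and not documented here. Purely for illustration, assuming a hypothetical text format with one position per line as "FEN<TAB>evaluation", a loader for such a training file could look like this:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the real UCITracker output format may differ.
// Here we assume a simple "FEN<TAB>evaluation" line format purely to
// illustrate turning tracked positions into training samples.
public class TrainingSetLoader {

    public static class Sample {
        public final String fen;
        public final double eval;
        public Sample(String fen, double eval) {
            this.fen = fen;
            this.eval = eval;
        }
    }

    public static List<Sample> parse(List<String> lines) {
        List<Sample> samples = new ArrayList<>();
        for (String line : lines) {
            String[] parts = line.split("\t"); // assumed separator
            samples.add(new Sample(parts[0], Double.parseDouble(parts[1])));
        }
        return samples;
    }

    public static void main(String[] args) {
        List<Sample> s = parse(List.of(
            "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1\t0.25"));
        System.out.println(s.get(0).fen + " -> " + s.get(0).eval);
    }
}
```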
- Main classes, which iterate over the training set. There is one for each of three networks with different features.
- Position visitors, which iterate over the positions and apply the learning on the training sets. They also print the current accuracy.
- Utility classes, which create the multilayer perceptron and fill the initial input signals.
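The visitor pattern above can be sketched in plain Java. The real visitors delegate the learning step to Neuroph; here an online gradient (delta-rule) update on a single-layer sigmoid network stands in for it, and the returned error is what the visitors would accumulate to report accuracy:

```java
// Plain-Java sketch of the visitor loop: run the network forward on a
// position's inputs, apply one online learning update, and report the
// error. The real code delegates training to Neuroph; the delta rule
// here is only illustrative.
public class LearningVisitorSketch {

    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // One online update of a single-layer sigmoid network; returns the
    // absolute error before the update, for accuracy tracking.
    static double visit(double[] inputs, double target,
                        double[] weights, double learningRate) {
        double sum = 0;
        for (int i = 0; i < inputs.length; i++) {
            sum += inputs[i] * weights[i];
        }
        double output = sigmoid(sum);
        double error = target - output;
        // Gradient of the squared error through the sigmoid.
        double delta = error * output * (1 - output);
        for (int i = 0; i < inputs.length; i++) {
            weights[i] += learningRate * delta * inputs[i];
        }
        return Math.abs(error);
    }

    public static void main(String[] args) {
        double[] weights = new double[2];
        double before = visit(new double[]{1, 1}, 1.0, weights, 0.5);
        double after = before;
        for (int epoch = 0; epoch < 200; epoch++) {
            after = visit(new double[]{1, 1}, 1.0, weights, 0.5);
        }
        System.out.println(before + " -> " + after); // error shrinks
    }
}
```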
- DeepLearningVisitorImpl_PST.java optimizes the piece square tables (PST) only. This leads to a weaker version which nevertheless still plays good chess. The filling of the network inputs can be found in NeuralNetworkUtils_PST.java.
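For intuition, a PST-style input filling can be sketched as one input per (piece type, square) pair. The encoding below (6 piece types x 64 squares, +1 for a white piece and -1 for a black piece on a square) is a common convention, not necessarily the exact layout in NeuralNetworkUtils_PST.java:

```java
// Illustrative sketch of PST-style input filling: 6 * 64 = 384 signals,
// one per (piece type, square) pair, signed by color. The real logic
// lives in NeuralNetworkUtils_PST.java; this encoding is an assumption.
public class PstInputsSketch {

    static final int PIECE_TYPES = 6; // pawn, knight, bishop, rook, queen, king
    static final int SQUARES = 64;

    // pieceTypeOnSquare: -1 = empty, 0..5 = piece type.
    // colorOnSquare: 0 = white, 1 = black.
    static double[] fillInputs(int[] pieceTypeOnSquare, int[] colorOnSquare) {
        double[] inputs = new double[PIECE_TYPES * SQUARES];
        for (int sq = 0; sq < SQUARES; sq++) {
            int piece = pieceTypeOnSquare[sq];
            if (piece >= 0) {
                // White contributes +1, black -1 to the same PST entry,
                // so one learned weight serves both colors symmetrically.
                inputs[piece * SQUARES + sq] = (colorOnSquare[sq] == 0) ? 1 : -1;
            }
        }
        return inputs;
    }

    public static void main(String[] args) {
        int[] types = new int[SQUARES];
        java.util.Arrays.fill(types, -1);
        int[] colors = new int[SQUARES];
        types[12] = 0;  // white pawn on e2 (a1 = square 0)
        colors[12] = 0;
        double[] inputs = fillInputs(types, colors);
        System.out.println(inputs[12]); // pawn PST entry for e2
    }
}
```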
- DeepLearningVisitorImpl_AllFeatures.java optimizes many features: material, king safety, piece mobility, double bishops, knight outposts, hanging pieces, castling, pawn structure (doubled pawns, isolated pawns, backward pawns, supported pawns, passed pawns, passed pawn candidates, unstoppable passers, etc.) and many others. The filling of the network inputs can be found in Bagatur_ALL_SignalFiller_InArray.java; for even more details, see Bagatur_ALL_SignalFiller.java.
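The idea behind that kind of signal filler can be sketched as follows: each evaluation term gets a fixed index in the input array, the signal is filled as a white-minus-black difference, and the network learns one weight per feature. The index names and the feature subset below are illustrative, not Bagatur's actual layout:

```java
// Hedged sketch of feature-based signal filling in the spirit of
// Bagatur_ALL_SignalFiller_InArray.java. Feature indices and the
// white-minus-black convention are assumptions for illustration.
public class AllFeaturesSketch {

    static final int FEATURE_MATERIAL      = 0;
    static final int FEATURE_MOBILITY      = 1;
    static final int FEATURE_KING_SAFETY   = 2;
    static final int FEATURE_DOUBLED_PAWNS = 3;
    static final int FEATURE_COUNT         = 4;

    // Each signal is a white-minus-black difference, so one learned
    // weight per feature serves both sides symmetrically.
    static double[] fillSignals(double materialW, double materialB,
                                double mobilityW, double mobilityB,
                                double kingSafetyW, double kingSafetyB,
                                int doubledPawnsW, int doubledPawnsB) {
        double[] signals = new double[FEATURE_COUNT];
        signals[FEATURE_MATERIAL] = materialW - materialB;
        signals[FEATURE_MOBILITY] = mobilityW - mobilityB;
        signals[FEATURE_KING_SAFETY] = kingSafetyW - kingSafetyB;
        signals[FEATURE_DOUBLED_PAWNS] = doubledPawnsW - doubledPawnsB;
        return signals;
    }

    public static void main(String[] args) {
        double[] s = fillSignals(39, 38, 20, 15, 5, 5, 1, 2);
        System.out.println(s[FEATURE_MATERIAL]); // 1.0
    }
}
```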