This repository contains an improvement for any covariance-matrix-adaptation-like evolution strategy that exploits the gradient or an estimate of it.
For a wide variety of problems, the gradient is the most frequently used information for finding optimal solutions. While CMA-like algorithms are state-of-the-art evolution strategies, they can be enhanced by knowing the most promising direction to take (using gradient information). This yields faster convergence, while the exploratory properties of CMA help prevent getting stuck in local minima.
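The idea of biasing an evolution strategy with gradient information can be sketched as follows. This is a minimal toy example, not the LMMA-ES variant implemented in this repository: it mixes the best sampled step of a simple ES with a normalized gradient-descent direction (the function names, the blending weight `alpha`, and the step-size decay are illustrative assumptions).

```python
import numpy as np

def sphere(x):
    """Toy objective: the sphere function, minimized at the origin."""
    return float(np.sum(x ** 2))

def grad_sphere(x):
    """Analytic gradient of the sphere function."""
    return 2.0 * x

def gradient_biased_es(f, grad, x0, sigma=0.5, popsize=8,
                       alpha=0.3, iters=200, seed=0):
    """Toy ES whose mean update blends the best sampled direction
    with the (negated, normalized) gradient direction."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # Sample a population of candidate steps around the current mean.
        steps = rng.standard_normal((popsize, x.size)) * sigma
        cands = x + steps
        fitness = np.array([f(c) for c in cands])
        best = cands[np.argmin(fitness)]
        # Blend the evolutionary move with the gradient descent direction.
        g = grad(x)
        g_dir = g / (np.linalg.norm(g) + 1e-12)
        x = x + (1.0 - alpha) * (best - x) - alpha * sigma * g_dir
        sigma *= 0.97  # simple geometric step-size decay
    return x

x_star = gradient_biased_es(sphere, grad_sphere, x0=np.ones(5))
```

With `alpha = 0` this reduces to a plain randomized search; increasing `alpha` exploits the gradient for faster progress while the random sampling retains some exploration.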
This algorithm was tested with the new forward-forward approach for training a neural network, with promising results:
- `ff_mnist_test.ipynb`: test on the MNIST dataset
- `ff_cifar_test.ipynb`: test on the CIFAR10 dataset
- `rl_cart_pole_test.ipynb`: test on the cart pole problem
- `rl_mountain_test.ipynb`: test on the mountain car continuous problem
This algorithm was also tested against the most popular benchmark functions in the file `benchmarks_test.ipynb`.
The file `utility/es.py` contains the implementation of the original LMMA-ES algorithm proposed in this paper, as well as the new variant.