We systematically benchmark our solver against a wide range of optimization problems, test functions, and competing optimization algorithms in order to ensure the best possible performance of our solution.
To achieve this we have built a benchmarking framework based on the well-known optimization framework COCO (http://coco.gforge.inria.fr/doku.php), whose constantly updated repository is available at https://github.com/numbbo/coco .
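For illustration only, a minimal benchmark loop built on COCO's Python experimentation module cocoex might look like the sketch below; the solver call is a placeholder (here scipy's Nelder-Mead) standing in for any optimizer under test, and the suite/observer options shown are just one possible configuration:

```python
import cocoex            # COCO's Python experimentation module
import scipy.optimize

# The "bbob" suite contains the standard single-objective noiseless test functions.
suite = cocoex.Suite("bbob", "", "dimensions: 2,3,5,10,20,40")
observer = cocoex.Observer("bbob", "result_folder: example-benchmark")

for problem in suite:
    problem.observe_with(observer)   # log every evaluation for post-processing
    # Placeholder solver: any optimizer under test could be plugged in here.
    scipy.optimize.minimize(problem, problem.initial_solution,
                            method="Nelder-Mead",
                            options={"maxfev": 100 * problem.dimension})
    problem.free()
```

The logged data can then be post-processed with COCO's standard tooling to compare solvers on equal footing.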
Benchmarking different algorithms is not a straightforward task, partly because third-party libraries typically expose tuning parameters that the user needs to adjust (ours does not!) and that can affect the measured performance. Moreover, condensing results from very different test functions into a single figure of merit is often limiting.
Below we report the results of our benchmark campaign. They summarize tests of our algorithm against other common optimization techniques on more than 20 test functions, spanning low to medium-high dimensions (2-40) and very tight to large budgets (10-200 evaluations).
By problem complexity:
A common way to classify problem complexity is the ratio between the number of calls to the black-box model that are allowed (the budget) and the number of inputs of a given task (the dimension). By this measure, most engineering problems fall between 'hard' (3 on a 5-grade scale) and 'very hard' (4 on a 5-grade scale), corresponding roughly to budgets of the same order of magnitude as the number of dimensions. For these problems, genetic and evolutionary algorithms typically do not perform well because of the large number of black-box calls needed just to generate the population.
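As a purely illustrative sketch, the classification described above could be expressed as a simple function of the budget-to-dimension ratio; the thresholds below are hypothetical and only meant to convey the idea that a budget of the same order of magnitude as the dimension lands in the 'hard'/'very hard' grades:

```python
def complexity_grade(budget: int, dimension: int) -> int:
    """Illustrative 1-5 hardness grade based on the budget/dimension ratio.

    The thresholds are hypothetical examples, not the exact classification
    used in our campaign.
    """
    ratio = budget / dimension
    if ratio >= 1000:
        return 1   # easy: plenty of evaluations per input
    if ratio >= 100:
        return 2   # moderate
    if ratio >= 10:
        return 3   # hard
    if ratio >= 1:
        return 4   # very hard: budget ~ same order as the dimension
    return 5       # extreme: fewer evaluations than inputs


# Example: 100 evaluations for a 20-dimensional problem -> grade 4 ('very hard')
print(complexity_grade(budget=100, dimension=20))
```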