Description: In the article below, the researchers optimize the configurations of deep neural networks and discard unpromising configurations early by extrapolating the networks' learning curves. The goal of this project is to assess whether and how a similar principle can be applied to black-box search algorithms (local search, evolutionary algorithms, etc.) by observing and predicting their convergence curves. The study can compare various models for algorithm performance prediction, e.g. a parametric exponential model or a non-parametric model based on resampling the algorithm's past improvements.
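The non-parametric idea mentioned above can be sketched as follows: given the best-so-far curve observed up to the current iteration, repeatedly resample the past per-iteration improvements to simulate how the run might continue, yielding a distribution over final values. This is a minimal illustration, not the method from the article; the function name, the example curve, and the budget are hypothetical.

```python
import random

def predict_final_values(curve, total_budget, n_samples=1000, rng=None):
    """Predict the distribution of a run's final best-so-far value
    (minimization) by resampling its past per-iteration improvements.

    curve: observed best-so-far values, one per iteration so far.
    total_budget: total number of iterations the run is allowed.
    Returns n_samples simulated final values.
    """
    rng = rng or random.Random(0)
    # Improvements between consecutive iterations (non-negative for
    # a best-so-far curve; many entries may be zero in practice).
    improvements = [curve[i] - curve[i + 1] for i in range(len(curve) - 1)]
    remaining = total_budget - len(curve)
    finals = []
    for _ in range(n_samples):
        value = curve[-1]
        for _ in range(remaining):
            # Assume future improvements look like resampled past ones.
            value -= rng.choice(improvements)
        finals.append(value)
    return finals

# Hypothetical run with shrinking improvements; should we keep running it?
observed = [10.0, 6.0, 4.0, 3.0, 2.5, 2.3, 2.2]
preds = predict_final_values(observed, total_budget=20)
mean_pred = sum(preds) / len(preds)
# An optimistic quantile: if even this is worse than the best run seen
# so far, the current run is a candidate for early termination.
optimistic = sorted(preds)[len(preds) // 20]  # roughly the 5th percentile
```

A parametric alternative would instead fit, e.g., an exponential-decay model to the observed curve and extrapolate it to the full budget; comparing such models against this resampling baseline is exactly the kind of study the project proposes.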