Optimizing with Low Budgets: a Comparison on the Black-box Optimization Benchmarking Suite and OpenAI Gym
CoRR (2023)
Abstract
The growing ubiquity of machine learning (ML) has led it to enter various
areas of computer science, including black-box optimization (BBO). Recent
research is particularly concerned with Bayesian optimization (BO). BO-based
algorithms are popular in the ML community, as they are used for hyperparameter
optimization and more generally for algorithm configuration. However, their
efficiency decreases as the dimensionality of the problem and the budget of
evaluations increase. Meanwhile, derivative-free optimization methods have
evolved independently in the optimization community. Therefore, we seek to
understand whether cross-fertilization is possible between the two communities,
ML and BBO, i.e., whether algorithms that are heavily used in ML also work well
in BBO and vice versa. Comparative experiments often involve rather small
benchmarks and show visible problems in the experimental setup, such as poor
initialization of baselines, overfitting due to problem-specific setting of
hyperparameters, and low statistical significance.
With this paper, we update and extend a comparative study presented by Hutter
et al. in 2013. We compare BBO tools for ML with more classical heuristics,
first on the well-known BBOB benchmark suite from the COCO environment and then
on Direct Policy Search for OpenAI Gym, a reinforcement learning benchmark. Our
results confirm that BO-based optimizers perform well on both benchmarks when
budgets are limited, albeit with a higher computational cost, while they are
often outperformed by algorithms from other families when the evaluation budget
becomes larger. We also show that some algorithms from the BBO community
perform surprisingly well on ML tasks.
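
To make the Direct Policy Search setup concrete: the optimizer treats the parameters of a fixed policy as a black-box search space, and the episodic return from the Gym environment serves as the objective value. The following is a minimal sketch of that loop, not the paper's experimental code; the environment name (CartPole-v1), the linear policy, and the simple (1+1)-ES optimizer are illustrative assumptions standing in for the optimizer families actually compared.

import numpy as np
import gymnasium as gym  # assumes the Gymnasium API: reset() -> (obs, info), step() -> 5-tuple

def episode_return(env, weights, max_steps=500):
    # One rollout with a linear policy: pick the action whose row of `weights`
    # scores the current observation highest.
    obs, _ = env.reset()
    total = 0.0
    for _ in range(max_steps):
        action = int(np.argmax(weights @ obs))
        obs, reward, terminated, truncated, _ = env.step(action)
        total += reward
        if terminated or truncated:
            break
    return total

def one_plus_one_es(env, n_actions, obs_dim, budget=200, sigma=0.5):
    # A (1+1) evolution strategy, used here only as a stand-in for the
    # black-box optimizers benchmarked in the paper.
    rng = np.random.default_rng(0)
    best = rng.normal(size=(n_actions, obs_dim))
    best_score = episode_return(env, best)
    for _ in range(budget):
        candidate = best + sigma * rng.normal(size=best.shape)
        score = episode_return(env, candidate)
        if score >= best_score:
            best, best_score = candidate, score
    return best, best_score

env = gym.make("CartPole-v1")
policy, score = one_plus_one_es(env, env.action_space.n, env.observation_space.shape[0])
print("best episodic return found:", score)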
Keywords
Benchmarking, Black-box optimization, BBOB, OpenAI Gym, Bayesian Optimization, Reinforcement Learning