Benchmark-Driven Configuration of a Parallel Model-Based Optimization Algorithm

IEEE Transactions on Evolutionary Computation (2022)

Abstract
This paper introduces a benchmarking framework that allows rigorous evaluation of parallel model-based optimizers for expensive functions. The framework establishes a relationship between the estimated costs of parallel function evaluations (on real-world problems) and known sets of test functions. Such real-world problems are not always readily available (e.g., due to confidentiality or proprietary software). Therefore, new test problems are created by Gaussian process simulation. The proposed framework is applied in an extensive benchmark study to compare multiple state-of-the-art parallel optimizers with a novel model-based algorithm, which combines an explorative search for global model quality with parallel local searches to increase function exploitation. The benchmarking framework is used to systematically configure good batch size setups for parallel algorithms based on landscape properties. Furthermore, we introduce a proof of concept for a novel automatic batch size configuration. The predictive quality of the batch size configuration is evaluated on a large set of test functions and on the functions generated by Gaussian process simulation. The introduced algorithm outperforms multiple state-of-the-art optimizers, especially on multimodal problems. Additionally, it proves to be particularly robust across various problem landscapes and performs well with all tested batch sizes. Consequently, it is well suited to black-box problems.
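The abstract mentions creating new test problems by Gaussian process simulation when real-world objectives are unavailable. A minimal sketch of that idea, assuming a squared-exponential kernel and a 1-D grid (both illustrative assumptions, not the paper's exact setup), draws one GP realization and wraps it as a cheap, deterministic surrogate objective:

```python
import numpy as np

def simulate_gp_test_function(n_grid=200, length_scale=0.1, seed=0):
    """Draw one realization of a 1-D Gaussian process on [0, 1] and
    return it as an interpolating test function. Hypothetical helper;
    kernel choice and grid size are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_grid)
    # Squared-exponential covariance matrix over the grid points.
    d = x[:, None] - x[None, :]
    K = np.exp(-0.5 * (d / length_scale) ** 2)
    # Small jitter for numerical stability, then sample via Cholesky.
    L = np.linalg.cholesky(K + 1e-10 * np.eye(n_grid))
    y = L @ rng.standard_normal(n_grid)
    # Linear interpolation makes the sampled path a cheap, deterministic
    # stand-in for an expensive black-box objective.
    return lambda q: np.interp(q, x, y)

f = simulate_gp_test_function(seed=1)
print(f(0.5))
```

Because each seed yields a different landscape with the same covariance structure, many such functions can be generated to benchmark parallel optimizers at scale.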
Keywords
Benchmarking, exploratory landscape analysis (ELA), model-based optimization, parallelization, simulation