DiBB: Distributing Black-Box Optimization

Cited by: 1
Authors
Cuccu, Giuseppe [1 ]
Rolshoven, Luca [1 ]
Vorpe, Fabien [1 ]
Cudre-Mauroux, Philippe [1 ]
Glasmachers, Tobias [2 ]
Affiliations
[1] Univ Fribourg, Exascale Infolab, Fribourg, Switzerland
[2] Ruhr Univ Bochum, Theory ML Grp, Bochum, Germany
Source
PROCEEDINGS OF THE 2022 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE (GECCO'22) | 2022
Keywords
Black-Box Optimization; Distributed Algorithms; Parallelization; Evolution Strategies; Neuroevolution
DOI
10.1145/3512290.3528764
CLC classification number
TP3 [computing technology, computer technology]
Discipline classification code
0812
Abstract
DiBB (for Distributing Black-Box) is a meta-algorithm and framework that addresses the decades-old scalability issue of Black-Box Optimization (BBO), including Evolutionary Computation. Algorithmically, it does so by creating, out of the box, a Partially Separable (PS) version of any existing black-box algorithm. This is done by leveraging expert knowledge about the task at hand to define blocks of parameters expected to have significant correlation, such as weights entering the same neuron/layer in a neuroevolution application. DiBB distributes the computation to a set of machines without further customization, while still retaining the advanced features of the underlying BBO algorithm, such as scale invariance and step-size adaptation, which are typically lost in recent distributed ES implementations. This is achieved by instantiating a separate instance of the underlying base algorithm for each block, running on a dedicated machine, with DiBB handling communication and constructing complete individuals for evaluation on the original task. DiBB's performance scales constantly with the number of parameter blocks defined, which should allow for unprecedented applications on large clusters. Our reference implementation (Python, on GitHub and PyPI) demonstrates a 5x speed-up on COCO/BBOB using our new PS-CMA-ES. We also showcase a neuroevolution application (11 590 weights) on the PyBullet Walker2D with our new PS-LM-MA-ES.
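The block-wise scheme the abstract describes can be illustrated with a toy sketch. This is not the authors' implementation: all names here are hypothetical, a simple (1+1)-ES with step-size adaptation stands in for the paper's CMA-ES/LM-MA-ES base algorithms, and the per-block optimizers run sequentially in one process rather than on dedicated machines. It shows only the core idea: each block mutates its own slice of the parameter vector, and a complete individual is assembled for every evaluation on the original task.

```python
import random

def sphere(x):
    # Toy fitness to minimize: sum of squares.
    return sum(v * v for v in x)

def dibb_sketch(dim, block_size, iters=200, seed=0):
    rng = random.Random(seed)
    # Partition parameter indices into contiguous blocks
    # (in DiBB, blocks come from expert knowledge, e.g. per-neuron weights).
    blocks = [list(range(i, min(i + block_size, dim)))
              for i in range(0, dim, block_size)]
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    step = [1.0] * len(blocks)  # each block keeps its own step size
    for _ in range(iters):
        for b, idx in enumerate(blocks):
            # Block b mutates only its own coordinates...
            cand = best[:]
            for i in idx:
                cand[i] = best[i] + rng.gauss(0, step[b])
            # ...but the complete individual is evaluated on the original task.
            if sphere(cand) < sphere(best):
                best = cand
                step[b] *= 1.5  # success: widen the search
            else:
                step[b] *= 0.9  # failure: shrink the step
    return best

result = dibb_sketch(dim=8, block_size=4)
print(sphere(result))
```

Because each block's optimizer touches only its own slice and its own step size, the blocks could run on separate machines with only the current best individual exchanged, which is the distribution angle the abstract describes.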
Pages: 341-349
Page count: 9