Utility distribution matters: enabling fast belief propagation for multi-agent optimization with dense local utility function

Cited by: 2
Authors
Deng, Yanchen [1 ]
An, Bo [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore, Singapore
Funding
National Research Foundation, Singapore;
Keywords
DCOP; Inference; Belief propagation; Max-sum; Domain pruning; CONSTRAINT OPTIMIZATION; ALGORITHM; SEARCH; BREAKOUT; GRAPHS; TREES; ADOPT;
DOI
10.1007/s10458-021-09511-z
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Belief propagation algorithms, including Max-sum and its variants, are important methods for multi-agent optimization. However, they face a significant scalability challenge because the computational overhead grows exponentially with the arity of each utility function. To date, a number of acceleration algorithms for belief propagation have been proposed. These algorithms maintain a lower bound on the total utility and employ either a domain pruning technique or branch and bound to reduce the search space. However, they still suffer from low-quality bounds and the inability to filter out suboptimal tied entries. In this paper, we first show that these issues are exacerbated and can considerably degrade the performance of state-of-the-art methods on problems with dense utility functions, which widely exist in many real-world domains. Building on this observation, we then develop several novel acceleration algorithms that alleviate the effect of densely distributed local utility values from the perspectives of both bound quality and search space organization. Specifically, we build a search tree for each distinct local utility value to enable efficient branch and bound on tied entries, and we tighten a running lower bound to perform dynamic domain pruning. That is, we integrate both search and pruning to iteratively reduce the search space. In addition, we propose a discretization mechanism that offers a tradeoff between reconstruction overhead and pruning efficiency. Finally, a K-depth partial tree-sorting scheme with different sorting criteria is proposed to reduce memory consumption. We demonstrate the superiority of our algorithms over state-of-the-art acceleration algorithms from both theoretical and experimental perspectives.
Pages: 40
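To make the bottleneck and the acceleration idea described in the abstract concrete, the following Python sketch computes one Max-sum factor-to-variable message by enumerating the local utility entries and prunes with a running lower bound over entries sorted by local utility. It is a minimal, generic illustration under assumed data structures (dict-based utilities and messages); it is not the paper's algorithm, and the function and parameter names are hypothetical.

import itertools

def factor_to_variable_message(variables, domains, utility, target, incoming):
    # Illustrative Max-sum factor-to-variable message with a simple
    # sorted-entry pruning rule driven by a running lower bound. A generic
    # simplification of the lower-bound / branch-and-bound acceleration idea
    # in the abstract, not the paper's algorithm.
    #
    # variables : ordered list of variables in the utility function's scope
    # domains   : dict variable -> list of values
    # utility   : dict assignment tuple (ordered as `variables`) -> float
    # target    : variable the message is sent to
    # incoming  : dict variable -> {value: float}, for the scope minus `target`
    others = [v for v in variables if v != target]
    # Optimistic bound on the incoming-message contribution of any entry.
    max_msg = sum(max(incoming[v].values()) for v in others)

    message = {}
    for t_val in domains[target]:
        # Enumerate the local utility entries consistent with target = t_val.
        entries = []
        for combo in itertools.product(*(domains[v] for v in others)):
            assign = dict(zip(others, combo))
            assign[target] = t_val
            entries.append((utility[tuple(assign[v] for v in variables)], combo))
        # Sort by descending local utility so the loop below can stop early.
        entries.sort(key=lambda e: e[0], reverse=True)

        best = float("-inf")  # running lower bound on the maximum
        for u, combo in entries:
            if u + max_msg <= best:
                break  # no remaining entry can beat the current lower bound
            value = u + sum(incoming[v][val] for v, val in zip(others, combo))
            best = max(best, value)
        message[t_val] = best
    return message

Note that when many entries share the same local utility value, the check u + max_msg <= best rarely fires, which illustrates the tied-entries issue with dense utility functions that the abstract highlights.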