Distributed Memory Approximate Message Passing

Cited: 0
Authors
Lu, Jun [1 ]
Liu, Lei [1 ]
Huang, Shunqi [2 ]
Wei, Ning [3 ,4 ]
Chen, Xiaoming [1 ]
Affiliations
[1] Zhejiang Univ, Coll Informat Sci & Elect Engn, Zhejiang Prov Key Lab Informat Proc Commun & Networking, Hangzhou 310007, Peoples R China
[2] Japan Adv Inst Sci & Technol, Sch Informat Sci, Nomi 9231292, Japan
[3] ZTE Corp, Shenzhen 518055, Peoples R China
[4] State Key Lab Mobile Network & Mobile Multimedia Technol, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Vectors; Transforms; Maximum likelihood estimation; Costs; Bayes methods; Message passing; Matrix converters; Consensus propagation; distributed information processing; memory approximate message passing; DYNAMICS;
DOI
10.1109/LSP.2024.3460478
Chinese Library Classification
TM [Electrical Technology]; TN [Electronics and Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Approximate message passing (AMP) algorithms are iterative methods for signal recovery in noisy linear systems. In some scenarios, AMP algorithms need to operate within a distributed network. To address this challenge, the distributed extensions of AMP (D-AMP, FD-AMP) and orthogonal/vector AMP (D-OAMP/D-VAMP) were proposed, but they still inherit the limitations of centralized algorithms. In this letter, we propose distributed memory AMP (D-MAMP) to overcome the IID matrix limitation of D-AMP/FD-AMP, as well as the high complexity and heavy communication cost of D-OAMP/D-VAMP. We introduce a matrix-by-vector variant of MAMP tailored for distributed computing. Leveraging this variant, D-MAMP enables each node to execute computations utilizing locally available observation vectors and transform matrices. Meanwhile, global summations of locally updated results are conducted through message interaction among nodes. For acyclic graphs, D-MAMP converges to the same mean square error performance as the centralized MAMP.
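To make the distributed computation pattern described in the abstract concrete, the following is a minimal, hypothetical Python sketch. It is not the paper's D-MAMP recursion; it only illustrates the general idea that a matrix-by-vector step can be split across nodes holding local observation vectors and transform matrices, with the global summation recovered by message passing along an acyclic (tree) topology. The partition, tree structure, and helper names (local_term, tree_sum) are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' D-MAMP algorithm): row-wise partition of
# the measurement model across nodes, with the global sum obtained by passing
# partial sums up a tree, so each node uses only its local (A_i, y_i).
import numpy as np

rng = np.random.default_rng(0)
N, M, num_nodes = 64, 96, 4                      # signal length, measurements, nodes
A = rng.standard_normal((M, N)) / np.sqrt(M)     # global transform matrix
x_true = rng.standard_normal(N)
y = A @ x_true + 0.01 * rng.standard_normal(M)   # noisy linear observations

# Row-wise partition: node i holds only (A_i, y_i).
A_blocks = np.array_split(A, num_nodes, axis=0)
y_blocks = np.array_split(y, num_nodes, axis=0)

def local_term(i, x_hat):
    """Node i computes A_i^T (y_i - A_i x_hat) from locally available data only."""
    return A_blocks[i].T @ (y_blocks[i] - A_blocks[i] @ x_hat)

# Acyclic topology: node 0 is the root, nodes 1..3 are leaves (assumed for illustration).
children = {0: [1, 2, 3], 1: [], 2: [], 3: []}

def tree_sum(node, x_hat):
    """Aggregate local terms by sending partial sums from children to parents."""
    total = local_term(node, x_hat)
    for child in children[node]:
        total += tree_sum(child, x_hat)          # message from child to parent
    return total

x_hat = np.zeros(N)
g_distributed = tree_sum(0, x_hat)               # global sum via message interaction
g_centralized = A.T @ (y - A @ x_hat)            # what a centralized node would compute
assert np.allclose(g_distributed, g_centralized)
```

Because the summation over a tree reproduces the centralized matrix-by-vector result exactly, a distributed iteration built on such sums can, on acyclic graphs, match the per-iteration quantities of its centralized counterpart, which is consistent with the convergence claim in the abstract.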
Pages: 2660-2664
Page count: 5