Decentralized Statistical Inference with Unrolled Graph Neural Networks

Cited by: 1
Authors
Wang, He [1 ,2 ,3 ]
Shen, Yifei [4 ]
Wang, Ziyuan [1 ]
Li, Dongsheng [5 ]
Zhang, Jun [6 ]
Letaief, Khaled B. [4 ]
Lu, Jie [1 ]
Affiliations
[1] ShanghaiTech Univ, Sch Informat Sci & Technol, Shanghai 201210, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[3] Chinese Acad Sci, Shanghai Inst Microsyst & Informat Technol, Shanghai 200050, Peoples R China
[4] Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China
[5] Microsoft Res Asia, Shanghai, Peoples R China
[6] Hong Kong Polytech Univ, Dept Elect & Informat Engn, Hong Kong, Peoples R China
Source
2021 60TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC) | 2021
Funding
National Natural Science Foundation of China;
Keywords
Decentralized optimization; graph neural networks; algorithm unrolling; interpretable deep learning; PROXIMAL GRADIENT ALGORITHM;
DOI
10.1109/CDC45484.2021.9682857
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In this paper, we investigate the decentralized statistical inference problem, where a network of agents cooperatively recovers a (structured) vector from private noisy samples without centralized coordination. Existing optimization-based algorithms suffer from model mismatch and slow convergence, so their performance degrades when the number of communication rounds is limited. This motivates us to propose a learning-based framework that unrolls well-known decentralized optimization algorithms (e.g., Prox-DGD and PG-EXTRA) into graph neural networks (GNNs). By minimizing the recovery error via end-to-end training, this learning-based framework resolves the model mismatch issue. Our convergence analysis (with PG-EXTRA as the base algorithm) reveals that the learned model parameters may accelerate convergence and substantially reduce the recovery error. Simulation results demonstrate that the proposed GNN-based learning methods significantly outperform several state-of-the-art optimization-based algorithms in convergence speed and recovery error.
Pages: 2634 - 2640
Number of pages: 7
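
The record itself contains no code; as a hedged illustration of the algorithm-unrolling idea summarized in the abstract, the PyTorch-style sketch below unrolls PG-EXTRA iterations for a decentralized LASSO-type problem into a fixed number of graph-neural-network layers with learnable per-layer step sizes and soft-thresholds. The class name `UnrolledPGEXTRA`, the least-squares-plus-l1 objective, and the exact parameterization are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn


def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1 (element-wise soft-thresholding)."""
    return torch.sign(x) * torch.clamp(torch.abs(x) - lam, min=0.0)


class UnrolledPGEXTRA(nn.Module):
    """Hypothetical sketch: PG-EXTRA iterations unrolled into GNN layers.

    Each layer performs one PG-EXTRA update for a decentralized LASSO-type
    problem in which agent i holds private data (A_i, y_i) and the network
    jointly estimates a sparse vector. Per-layer step sizes and thresholds
    are trainable, mirroring the "learned model parameters" described in
    the abstract; the paper's actual parameterization may differ.
    """

    def __init__(self, num_layers: int):
        super().__init__()
        self.num_layers = num_layers
        # Learnable per-layer step size alpha_k and threshold theta_k.
        self.alpha = nn.Parameter(0.01 * torch.ones(num_layers))
        self.theta = nn.Parameter(0.001 * torch.ones(num_layers))

    def forward(self, A, y, W):
        # A: (n_agents, m, d) local sensing matrices (private to each agent)
        # y: (n_agents, m)    local noisy measurements
        # W: (n_agents, n_agents) symmetric doubly stochastic mixing matrix
        n_agents, _, d = A.shape
        W_tilde = 0.5 * (W + torch.eye(n_agents, device=W.device))

        def local_grad(x):
            # Gradient of each agent's least-squares term, one row per agent.
            residual = torch.einsum('nmd,nd->nm', A, x) - y
            return torch.einsum('nmd,nm->nd', A, residual)

        x_prev = torch.zeros(n_agents, d, device=A.device)
        # Layer 0: one round of mixing followed by a proximal-gradient step.
        z = W @ x_prev - self.alpha[0] * local_grad(x_prev)
        x = soft_threshold(z, self.theta[0])
        for k in range(1, self.num_layers):
            # PG-EXTRA recursion: mixing with W / W_tilde plays the role of
            # the GNN's neighborhood aggregation over the communication graph.
            z = (W @ x + z - W_tilde @ x_prev
                 - self.alpha[k] * (local_grad(x) - local_grad(x_prev)))
            x_prev, x = x, soft_threshold(z, self.theta[k])
        return x  # each row is an agent's current estimate of the signal
```

In this sketch, end-to-end training as described in the abstract would minimize the recovery error against ground-truth signals over a training set, e.g. `loss = ((model(A, y, W) - x_true) ** 2).mean()`, so that the learned step sizes and thresholds compensate for the limited number of communication rounds.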