FedGKD: Federated Graph Knowledge Distillation for privacy-preserving rumor detection

Cited: 0
Authors
Zheng, Peng [1 ]
Dou, Yong [1 ]
Yan, Yeqing [1 ]
Affiliations
[1] National University of Defense Technology, School of Computer Science, Changsha, People's Republic of China
Funding
National Natural Science Foundation of China;
Keywords
Rumor detection; Federated learning; Privacy-preserving; Knowledge distillation;
DOI
10.1016/j.knosys.2024.112476
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The massive spread of rumors on social networks has caused serious harm to individuals and society, making rumor detection increasingly urgent. Existing deep-learning-based detection methods have achieved strong results thanks to their powerful semantic representation capabilities. However, their centralized training mode and reliance on extensive training data containing user privacy pose significant risks of privacy abuse or leakage. Although federated learning with client-level differential privacy offers a potential solution, it causes a dramatic decline in model performance. To address these issues, we propose a Federated Graph Knowledge Distillation framework (FedGKD), which aims to identify rumors effectively while preserving user privacy. In this framework, we anonymize graphs along both the feature and structure dimensions, and apply differential privacy only to sensitive features to avoid significant deviation in data statistics. Additionally, to improve model generalization in federated settings, we learn a lightweight generator at the server that extracts global knowledge through knowledge distillation. This knowledge is then broadcast to clients as inductive experience to regulate their local training. Extensive experiments on four publicly available datasets demonstrate that FedGKD outperforms strong baselines and displays outstanding privacy-preserving capabilities.
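The abstract notes that differential privacy is applied only to sensitive features, so that the overall statistics of the data are not heavily distorted. A minimal sketch of that idea, assuming the standard Gaussian mechanism and hypothetical column indices (this is an illustration of the general technique, not the paper's actual implementation):

```python
import numpy as np

def gaussian_sigma(epsilon, delta, sensitivity):
    # Classic calibration for the Gaussian mechanism (valid for epsilon < 1):
    # sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon.
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def privatize_features(x, sensitive_cols, epsilon=0.5, delta=1e-5, clip=1.0):
    """Add calibrated Gaussian noise only to the sensitive columns of a
    node-feature matrix, leaving the non-sensitive columns exact."""
    x = x.copy()
    sens = x[:, sensitive_cols]
    # Clip each row of the sensitive block to bound its L2 sensitivity.
    norms = np.linalg.norm(sens, axis=1, keepdims=True)
    sens = sens * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    sigma = gaussian_sigma(epsilon, delta, sensitivity=clip)
    x[:, sensitive_cols] = sens + np.random.normal(0.0, sigma, sens.shape)
    return x

# Hypothetical node-feature matrix: columns 0-1 hold sensitive user
# attributes; columns 2-3 are non-sensitive content features and stay exact.
features = np.random.rand(100, 4)
noisy = privatize_features(features, sensitive_cols=[0, 1])
```

Because noise is injected only into the sensitive block, aggregate statistics computed over the remaining columns are unchanged, which matches the abstract's stated motivation for selective, rather than whole-feature, perturbation.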
Pages: 11
Related Papers
52 items in total
  • [1] Deep Learning with Differential Privacy
    Abadi, Martin
    Chu, Andy
    Goodfellow, Ian
    McMahan, H. Brendan
    Mironov, Ilya
    Talwar, Kunal
    Zhang, Li
    [J]. CCS'16: PROCEEDINGS OF THE 2016 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2016, : 308 - 318
  • [2] Bian T., 2020, AAAI Conference on Artificial Intelligence, V34, P549
  • [3] Survey on Privacy-Preserving Techniques for Microdata Publication
    Carvalho, Tania
    Moniz, Nuno
    Faria, Pedro
    Antunes, Luis
    [J]. ACM COMPUTING SURVEYS, 2023, 55 (14S)
  • [4] Multi-view learning with distinguishable feature fusion for rumor detection
    Chen, Xueqin
    Zhou, Fan
    Trajcevski, Goce
    Bonsangue, Marcello
    [J]. KNOWLEDGE-BASED SYSTEMS, 2022, 240
  • [5] Differentially Private Federated Learning with Local Regularization and Sparsification
    Cheng, Anda
    Wang, Peisong
    Zhang, Xi Sheryl
    Cheng, Jian
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 10112 - 10121
  • [6] Causal diffused graph-transformer network with stacked early classification loss for efficient stream classification of rumours
    Cheung, Tsun-Hin
    Lam, Kin-Man
    [J]. KNOWLEDGE-BASED SYSTEMS, 2023, 277
  • [7] Unsupervised Rumor Detection Based on Propagation Tree VAE
    Fang, Lanting
    Feng, Kaiyu
    Zhao, Kaiqi
    Hu, Aiqun
    Li, Tao
    [J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (10) : 10309 - 10323
  • [8] Geyer R. C., 2017, arXiv
  • [9] Knowledge Distillation: A Survey
    Gou, Jianping
    Yu, Baosheng
    Maybank, Stephen J.
    Tao, Dacheng
    [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2021, 129 (06) : 1789 - 1819
  • [10] He Ruifei, 2022, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), P9161