FedGKD: Federated Graph Knowledge Distillation for privacy-preserving rumor detection

Times Cited: 0
Authors
Zheng, Peng [1 ]
Dou, Yong [1 ]
Yan, Yeqing [1 ]
Affiliations
[1] Natl Univ Def Technol, Sch Comp Sci, Changsha, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Rumor detection; Federated learning; Privacy-preserving; Knowledge distillation;
D O I
10.1016/j.knosys.2024.112476
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The massive spread of rumors on social networks has caused serious adverse effects on individuals and society, increasing the urgency of rumor detection. Existing detection methods based on deep learning have achieved strong results thanks to their powerful semantic representation capabilities. However, their centralized training mode and reliance on extensive training data containing user privacy pose significant risks of privacy abuse or leakage. Although federated learning with client-level differential privacy offers a potential solution, it causes a dramatic decline in model performance. To address these issues, we propose a Federated Graph Knowledge Distillation framework (FedGKD), which aims to identify rumors effectively while preserving user privacy. In this framework, we anonymize graphs along both the feature and structure dimensions, and apply differential privacy only to sensitive features to prevent significant deviation in data statistics. Additionally, to improve model generalization in federated settings, we learn a lightweight generator at the server that extracts global knowledge through knowledge distillation. This knowledge is then broadcast to clients as inductive experience to regularize their local training. Extensive experiments on four publicly available datasets demonstrate that FedGKD outperforms strong baselines and exhibits outstanding privacy-preserving capability.
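The idea of perturbing only sensitive features, leaving the remaining columns intact so overall data statistics deviate less, can be illustrated with a generic Laplace-mechanism sketch. This is not the authors' implementation; the function name, the choice of the Laplace mechanism, and the sensitivity/epsilon parameters are illustrative assumptions.

```python
import numpy as np

def perturb_sensitive_features(features, sensitive_idx, sensitivity=1.0,
                               epsilon=1.0, seed=None):
    """Add Laplace noise only to the columns listed in sensitive_idx.

    Illustrative sketch: non-sensitive columns are left untouched, so the
    dataset's statistics deviate far less than under full-feature
    perturbation. Laplace scale b = sensitivity / epsilon.
    """
    rng = np.random.default_rng(seed)
    noisy = features.astype(float).copy()
    scale = sensitivity / epsilon
    noise = rng.laplace(loc=0.0, scale=scale,
                        size=(features.shape[0], len(sensitive_idx)))
    noisy[:, sensitive_idx] += noise
    return noisy

# Example: 4 graph nodes, 5 features; only columns 1 and 3 are sensitive.
X = np.ones((4, 5))
X_priv = perturb_sensitive_features(X, sensitive_idx=[1, 3], epsilon=0.5)
```

A smaller epsilon yields stronger privacy (larger noise scale) on the selected columns, while the untouched columns keep the clean statistics that downstream graph learning relies on.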
Pages: 11