Trading Trust for Privacy: Socially-Motivated Personalized Privacy-Preserving Collaborative Learning in IoT

Cited by: 0
Authors
Chen, Yuliang [1 ,2 ]
Lin, Xi [1 ,2 ]
Li, Gaolei [1 ,2 ]
Chen, Lixing [1 ,2 ]
Wang, Jing [1 ,2 ]
Liao, Siyi [3 ]
Li, Jianhua [1 ,2 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[2] Shanghai Key Lab Integrated Adm Technol Informat, Shanghai, Peoples R China
[3] China United Network Commun Co Ltd, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024 | 2024
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; differential privacy; social network; Internet of Things;
DOI
10.1109/CSCWD61410.2024.10580732
CLC classification number
TP39 [Computer applications];
Discipline classification codes
081203 ; 0835 ;
Abstract
Collaborative federated learning (CFL) is developing rapidly in the Internet of Things (IoT), allowing clients to jointly train models without exposing their private data. Existing research has studied trust enhancement and privacy preservation in CFL in isolation. Because trust and privacy are tightly coupled in a collaborative environment, it is worth investigating how to balance the two to realize high-quality CFL. In this paper, we propose the idea of "trading Trust for Privacy" and present a novel socially-motivated personalized privacy-preserving federated learning (SP-PFL) framework, which aims to realize social-trust-grained privacy protection. First, we design a social trust evaluation method among CFL clients based on topological relations and attribute similarity. Based on the obtained trust values, we then propose a trust-grained privacy budget allocation strategy for SP-PFL, which adaptively adjusts the differential privacy (DP) noise perturbation. In addition, we provide a privacy and convergence analysis for SP-PFL. Finally, we experiment with different models and parameter settings on different datasets. Extensive experimental results show that our method maintains personalized privacy and improves accuracy by 6.11% with a CNN model on the MNIST dataset.
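The abstract's core mechanism — mapping a social trust score to a per-client privacy budget that scales the DP noise — can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the combining weight `alpha`, the budget range `(eps_min, eps_max)`, and the use of the Laplace mechanism are all illustrative assumptions.

```python
import numpy as np

def trust_score(topo_overlap, attr_similarity, alpha=0.5):
    """Hypothetical trust value in [0, 1], combining a topological-relation
    score and an attribute-similarity score with weight alpha."""
    return alpha * topo_overlap + (1 - alpha) * attr_similarity

def allocate_budget(trust, eps_min=0.5, eps_max=8.0):
    """Trust-grained budget allocation (assumed linear): higher trust
    yields a larger epsilon, i.e., weaker perturbation."""
    return eps_min + trust * (eps_max - eps_min)

def perturb_update(update, trust, sensitivity=1.0):
    """Perturb a client's model update with Laplace noise of scale
    b = sensitivity / epsilon (standard Laplace mechanism)."""
    eps = allocate_budget(trust)
    noise = np.random.laplace(0.0, sensitivity / eps, size=update.shape)
    return update + noise
```

A highly trusted peer (trust near 1) receives epsilon near `eps_max` and thus lightly perturbed updates, while an untrusted peer's updates are noised heavily — the "trading trust for privacy" tradeoff the paper names.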
Pages: 935-940
Page count: 6