Squeezing More Utility via Adaptive Clipping on Differentially Private Gradients in Federated Meta-Learning

Cited by: 3
Authors
Wang, Ning [1 ]
Xiao, Yang [2 ]
Chen, Yimin [3 ]
Zhang, Ning [4 ]
Lou, Wenjing [1 ]
Hou, Y. Thomas [1 ]
Affiliations
[1] Virginia Tech, Blacksburg, VA 24061 USA
[2] Univ Kentucky, Lexington, KY USA
[3] Univ Massachusetts Lowell, Lowell, MA USA
[4] Washington Univ St Louis, St Louis, MO USA
Source
PROCEEDINGS OF THE 38TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE, ACSAC 2022 | 2022
Funding
US National Science Foundation
Keywords
differential privacy; federated meta-learning; adaptive clipping; privacy utility trade-off;
DOI
10.1145/3564625.3564652
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline code
0812
Abstract
Federated meta-learning has emerged as a promising AI framework for today's mobile computing scenarios involving distributed clients. It enables collaborative model training using the data located at distributed mobile clients and accommodates clients that need fast model customization with limited new data. However, federated meta-learning solutions are susceptible to inference-based privacy attacks, since the global model encoded with clients' training data is open to all clients and the central server. Meanwhile, differential privacy (DP) has been widely used as a countermeasure against privacy inference attacks in federated learning. The adoption of DP in federated meta-learning is complicated by the model accuracy-privacy trade-off and by the model hierarchy attributed to the meta-learning component. In this paper, we introduce DP-FedMeta, a new differentially private federated meta-learning architecture that addresses such data privacy challenges. DP-FedMeta features an adaptive gradient clipping method and a one-pass meta-training process to improve the model utility-privacy trade-off. At the core of DP-FedMeta are two DP mechanisms, namely DP-AGR and DP-AGRLR, which provide two notions of privacy protection for the hierarchical models. Extensive experiments in an emulated federated meta-learning scenario on well-known datasets (Omniglot, CIFAR-FS, and Mini-ImageNet) demonstrate that DP-FedMeta accomplishes better privacy protection while maintaining comparable model accuracy compared to the state-of-the-art solution that directly applies DP-based meta-learning to the federated setting.
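The abstract's core idea, adaptive gradient clipping under DP, can be illustrated with a minimal sketch. This is not the paper's DP-AGR/DP-AGRLR algorithm (the record does not specify them); it is a generic DP-SGD-style aggregation in which the clipping threshold is adapted to a quantile of observed per-sample gradient norms, an assumption chosen for illustration. The function names `adapt_clip_norm` and `dp_clip_and_aggregate` are hypothetical.

```python
import numpy as np

def adapt_clip_norm(grads, quantile=0.5):
    """Illustrative adaptive clipping: set the threshold to a quantile of
    the observed per-sample gradient norms (assumption, not DP-AGR itself).
    Clipping tighter than the largest norms trades bias for less noise."""
    norms = [np.linalg.norm(g) for g in grads]
    return float(np.quantile(norms, quantile))

def dp_clip_and_aggregate(grads, clip_norm, noise_multiplier, rng):
    """Clip each per-sample gradient to L2 norm <= clip_norm, sum, and add
    Gaussian noise calibrated to the clipping bound (DP-SGD style)."""
    clipped = []
    for g in grads:
        norm = np.linalg.norm(g)
        # Scale down only if the gradient exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(grads)
```

In a federated meta-learning setting, a threshold like this would have to be chosen privately (e.g., from noisy norm statistics), since the quantile itself leaks information about client data; the sketch omits that step.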
Pages: 647-657
Page count: 11