Communicational and Computational Efficient Federated Domain Adaptation

Cited by: 5
Authors
Kang, Hua [1 ]
Li, Zhiyang [2 ]
Zhang, Qian [1 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[2] ByteDance Inc, Hangzhou 310030, Zhejiang, Peoples R China
Keywords
Feature extraction; Training; Adaptation models; Computational modeling; Data models; Optimization methods; Transfer learning; Federated learning; domain adaptation; communicational efficient;
DOI
10.1109/TPDS.2022.3167457
CLC number
TP301 [Theory and Methods]
Subject classification code
081202
Abstract
The emerging paradigm of Federated Learning enables mobile users to collaboratively train a model without disclosing their privacy-sensitive data. Nevertheless, data collected from different mobile users may not be independent and identically distributed. Thus, directly applying the trained model to a new mobile user usually leads to performance degradation due to the so-called domain shift. Unsupervised Domain Adaptation is an effective technique to mitigate domain shift and transfer knowledge from labeled source domains to the unlabeled target domain. In this article, we design a Federated Domain Adaptation framework that extends Domain Adaptation with the constraints of Federated Learning to train a model for the target domain while preserving the data privacy of all the source and target domains. As mobile devices usually have limited computation and communication capabilities, we design a set of optimization methods that significantly enhance our framework's computation and communication efficiency, making it better suited to resource-constrained edge devices. Evaluation results on three datasets show that our framework achieves performance comparable to the standard centralized training approach, and the optimization methods can reduce the computation and communication overheads by up to two orders of magnitude.
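To make the abstract's setting concrete, the toy sketch below combines FedAvg-style training over two labeled source clients with a simple first-moment feature alignment on an unlabeled target client. This is an illustrative assumption only, not the paper's actual algorithm: the model (logistic regression), the synthetic covariate shift, and the mean-matching alignment rule are all stand-ins chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_step(w, X, y, lr=0.1):
    """One logistic-regression gradient step on a labeled source client."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return w - lr * X.T @ (p - y) / len(y)

# Two labeled source clients drawn from the same distribution,
# and one unlabeled target client whose features are shifted.
X1, X2 = rng.normal(size=(100, 5)), rng.normal(size=(100, 5))
y1 = (X1[:, 0] + X1[:, 1] > 0).astype(float)
y2 = (X2[:, 0] + X2[:, 1] > 0).astype(float)
X_src = np.vstack([X1, X2])
y_src = np.concatenate([y1, y2])
X_tgt = X_src + 0.5          # covariate shift on the target domain
y_tgt = y_src                # ground truth, never seen during training

# Federated training: clients exchange model updates, never raw data.
w = np.zeros(5)
for _ in range(100):
    w = (local_step(w, X1, y1) + local_step(w, X2, y2)) / 2  # FedAvg

# Adaptation: source clients share only per-feature means; the target
# client re-centers its features to match (first-moment alignment).
src_mean = (X1.mean(axis=0) + X2.mean(axis=0)) / 2
X_tgt_aligned = X_tgt - X_tgt.mean(axis=0) + src_mean

src_acc = (((X_src @ w) > 0) == (y_src > 0.5)).mean()
tgt_acc = (((X_tgt_aligned @ w) > 0) == (y_tgt > 0.5)).mean()
```

In a real deployment the shared statistics would typically pass through a secure-aggregation protocol so that no individual client's summary is revealed, and the paper's optimization methods additionally target the computation and communication cost of the adaptation itself.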
Pages: 3678 - 3689
Page count: 12