Federated unsupervised representation learning

Cited by: 17
Authors
Zhang, Fengda [1 ]
Kuang, Kun [1 ]
Chen, Long [1 ]
You, Zhaoyang [1 ]
Shen, Tao [1 ]
Xiao, Jun [1 ]
Zhang, Yin [1 ]
Wu, Chao [2 ]
Wu, Fei [1 ]
Zhuang, Yueting [1 ]
Li, Xiaolin [3 ,4 ,5 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Peoples R China
[2] Zhejiang Univ, Sch Publ Affairs, Hangzhou 310027, Peoples R China
[3] Tongdun Technol, Hangzhou 310000, Peoples R China
[4] Chinese Acad Sci, Inst Basic Med & Canc, Hangzhou 310018, Peoples R China
[5] Elast Mind AI Technol Inc, Hangzhou 310018, Peoples R China
Funding
National Natural Science Foundation of China; Natural Science Foundation of Zhejiang Province;
Keywords
Federated learning; Unsupervised learning; Representation learning; Contrastive learning; TP183; ARTIFICIAL-INTELLIGENCE; ALGORITHM; KNOWLEDGE; BIG;
DOI
10.1631/FITEE.2200268
CLC number
TP [Automation technology, computer technology];
Discipline code
0812 ;
Abstract
To leverage the enormous amount of unlabeled data on distributed edge devices, we formulate a new problem in federated learning called federated unsupervised representation learning (FURL): learning a common representation model without supervision while preserving data privacy. FURL poses two new challenges: (1) data distribution shift (non-independent and identically distributed, non-IID) among clients makes local models focus on different categories, leading to inconsistent representation spaces; (2) without unified information shared among clients, the representations learned across clients become misaligned. To address these challenges, we propose the federated contrastive averaging with dictionary and alignment (FedCA) algorithm. FedCA comprises two key modules: a dictionary module, which aggregates representations of samples from each client and shares them with all clients to keep the representation space consistent, and an alignment module, which aligns each client's representations with those of a base model trained on public data. We adopt a contrastive approach for local model training. Through extensive experiments with three evaluation protocols in IID and non-IID settings, we demonstrate that FedCA outperforms all baselines by significant margins.
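The local contrastive training with a shared dictionary described above can be illustrated with a minimal InfoNCE-style loss, where the dictionary supplies negatives common to all clients. This is a toy sketch with assumed function names and hand-picked vectors, not the authors' implementation:

```python
import math

def info_nce_loss(anchor, positive, dictionary, temperature=0.5):
    """InfoNCE-style contrastive loss: pull the anchor toward its
    positive view, push it away from dictionary entries (negatives
    shared across clients for a consistent representation space)."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def cosine(u, v):
        nu = math.sqrt(dot(u, u)) or 1.0
        nv = math.sqrt(dot(v, v)) or 1.0
        return dot(u, v) / (nu * nv)

    pos = math.exp(cosine(anchor, positive) / temperature)
    negs = sum(math.exp(cosine(anchor, d) / temperature) for d in dictionary)
    return -math.log(pos / (pos + negs))

# Toy 2-D embeddings: the positive is near the anchor, negatives are not.
anchor = [1.0, 0.0]
positive = [0.9, 0.1]
dictionary = [[-1.0, 0.0], [0.0, 1.0]]

loss_aligned = info_nce_loss(anchor, positive, dictionary)
loss_misaligned = info_nce_loss(anchor, [-1.0, 0.0], [positive] + dictionary)
assert loss_aligned < loss_misaligned  # aligned pairs incur lower loss
```

In FedCA the dictionary entries would be representations aggregated from all clients by the server; here they are fixed vectors purely for illustration.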
Pages: 1181-1193
Page count: 13
Related papers
50 records in total
  • [31] Meta-HAR: Federated Representation Learning for Human Activity Recognition
    Li, Chenglin
    Niu, Di
    Jiang, Bei
    Zuo, Xiao
    Yang, Jianming
    PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE 2021 (WWW 2021), 2021, : 912 - 922
  • [32] FedSEMA: similarity-aware for representation consistency in federated contrastive learning
    Zhou, Yanbing
    Wu, Yingbo
    Zhou, Jiyang
    Zheng, Xin
    APPLIED INTELLIGENCE, 2024, 54 (01) : 301 - 316
  • [34] Personalized Federated Learning via Deviation Tracking Representation Learning
    Jang, Jaewon
    Choi, Bong Jun
    38TH INTERNATIONAL CONFERENCE ON INFORMATION NETWORKING, ICOIN 2024, 2024, : 762 - 766
  • [35] Fed-UIQA: Federated Learning for Unsupervised Finger Vein Image Quality Assessment
    Liu, Xingli
    Guo, Jian
    Mu, Hengyu
    Gong, Lejun
    Han, Chong
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT V, ICIC 2024, 2024, 14866 : 377 - 389
  • [36] Contrastive and Non-Contrastive Strategies for Federated Self-Supervised Representation Learning and Deep Clustering
    Miao, Runxuan
    Koyuncu, Erdem
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2024, 18 (06) : 1070 - 1084
  • [37] UPFL: Unsupervised Personalized Federated Learning towards New Clients
    Ye, Tiandi
    Chen, Cen
    Wang, Yinggui
    Li, Xiang
    Gao, Ming
    PROCEEDINGS OF THE 2024 SIAM INTERNATIONAL CONFERENCE ON DATA MINING, SDM, 2024, : 851 - 859
  • [38] Federated Learning in Healthcare with Unsupervised and Semi-Supervised Methods
    Panos-Basterra, Juan
    Dolores Ruiz, M.
    Martin-Bautista, Maria J.
    FLEXIBLE QUERY ANSWERING SYSTEMS, FQAS 2023, 2023, 14113 : 182 - 193
  • [39] Unsupervised representation learning with Minimax distance measures
    Haghir Chehreghani, Morteza
    MACHINE LEARNING, 2020, 109 (11) : 2063 - 2097
  • [40] UNSUPERVISED REPRESENTATION LEARNING OF SPEECH FOR DIALECT IDENTIFICATION
    Shon, Suwon
    Hsu, Wei-Ning
    Glass, James
    2018 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY (SLT 2018), 2018, : 105 - 111