Knowledge-Aware Federated Active Learning with Non-IID Data

Cited by: 12
Authors
Cao, Yu-Tong [1]
Shi, Ye [2]
Yu, Baosheng [1]
Wang, Jingya [2]
Tao, Dacheng [1]
Affiliations
[1] Univ Sydney, Sydney AI Ctr, Sch Comp Sci, Sydney, NSW, Australia
[2] ShanghaiTech Univ, Shanghai, Peoples R China
Source
2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023
DOI
10.1109/ICCV51070.2023.02036
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Federated learning enables multiple decentralized clients to learn collaboratively without sharing local data. However, the expensive annotation cost on local clients remains an obstacle in utilizing local data. In this paper, we propose a federated active learning paradigm to efficiently learn a global model with a limited annotation budget while protecting data privacy in a decentralized learning manner. The main challenge faced by federated active learning is the mismatch between the active sampling goal of the global model on the server and that of the asynchronous local clients. This becomes even more significant when data is distributed non-IID across local clients. To address the aforementioned challenge, we propose Knowledge-Aware Federated Active Learning (KAFAL), which consists of Knowledge-Specialized Active Sampling (KSAS) and Knowledge-Compensatory Federated Update (KCFU). Specifically, KSAS is a novel active sampling method tailored for the federated active learning problem, aiming to deal with the mismatch challenge by sampling actively based on the discrepancies between local and global models. KSAS intensifies specialized knowledge in local clients, ensuring the sampled data is informative for both the local clients and the global model. Meanwhile, KCFU deals with the client heterogeneity caused by limited data and non-IID data distributions by compensating for each client's ability in weak classes with the assistance of the global model. Extensive experiments and analyses are conducted to show the superiority of KAFAL over recent state-of-the-art active learning methods. Code is available at https://github.com/ycao5602/KAFAL.
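The core sampling idea described in the abstract, where each client queries the unlabelled points on which its local model and the global model disagree most, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical toy implementation assuming PyTorch logits; the function names and the plain softmax-KL discrepancy are assumptions for illustration and differ from the knowledge-specialized divergence used in the paper, so consult the authors' repository above for the actual KSAS/KCFU code.

```python
# Hypothetical sketch of discrepancy-based active sampling in the spirit of
# KSAS: score each unlabelled sample by how much the global and local models
# disagree, then query the most divergent ones. Names and the plain KL measure
# are illustrative assumptions, not the paper's exact formulation.
import torch
import torch.nn.functional as F


def discrepancy_scores(local_logits: torch.Tensor,
                       global_logits: torch.Tensor) -> torch.Tensor:
    """Per-sample KL(global || local) between the two models' predictions."""
    local_log_p = F.log_softmax(local_logits, dim=1)
    global_p = F.softmax(global_logits, dim=1)
    global_log_p = F.log_softmax(global_logits, dim=1)
    return (global_p * (global_log_p - local_log_p)).sum(dim=1)


def select_queries(local_logits: torch.Tensor,
                   global_logits: torch.Tensor,
                   budget: int) -> torch.Tensor:
    """Indices of the `budget` unlabelled samples on which the local and
    global models disagree most; these would be sent for annotation."""
    scores = discrepancy_scores(local_logits, global_logits)
    return torch.topk(scores, k=budget).indices


if __name__ == "__main__":
    # Toy usage: random logits stand in for real model outputs on a client's
    # unlabelled pool of 1,000 samples over 10 classes.
    local_logits = torch.randn(1000, 10)
    global_logits = torch.randn(1000, 10)
    print(select_queries(local_logits, global_logits, budget=50).shape)
```

The KCFU side, where each client's local update is additionally guided by the global model on its weak classes, is not shown here; it would amount to an extra distillation-style term in the local training loss under the same hedged assumptions.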
Pages: 22222-22232
Page count: 11