DCIGAN: A Distributed Class-Incremental Learning Method Based on Generative Adversarial Networks

Cited by: 5
Authors
Guan, Hongtao [1 ]
Wang, Yijie [1 ]
Ma, Xingkong [1 ]
Li, Yongmou [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Comp, Sci & Technol Parallel & Distributed Lab, Changsha, Hunan, Peoples R China
Source
2019 IEEE INTL CONF ON PARALLEL & DISTRIBUTED PROCESSING WITH APPLICATIONS, BIG DATA & CLOUD COMPUTING, SUSTAINABLE COMPUTING & COMMUNICATIONS, SOCIAL COMPUTING & NETWORKING (ISPA/BDCLOUD/SOCIALCOM/SUSTAINCOM 2019) | 2019
Funding
National Natural Science Foundation of China; Science Foundation of the Ministry of Education of China
Keywords
class-incremental learning; distributed learning; generative adversarial networks; data isolated islands; CLASSIFIERS; MACHINES; ENSEMBLE;
DOI
10.1109/ISPA-BDCloud-SustainCom-SocialCom48970.2019.00115
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Class-incremental learning has received wide attention because it adapts better to the changing characteristics of online learning. In the data-isolated-islands scenario, however, data cannot be shared between organizations, and existing solutions cannot adapt to incremental classes without aggregating data. In this paper, we propose a distributed class-incremental learning framework called DCIGAN. It uses GAN generators to store the information of past data and continuously updates the GAN parameters with new data. In particular, we propose CIGAN to ensure that the distribution of the pseudo data generated on a single node stays as close as possible to that of the real data, which guarantees the accuracy of class-incremental learning. Furthermore, we propose GF, a generator-fusion method that integrates the local generators of multiple nodes into a new global generator. To evaluate the performance of DCIGAN, we conduct experiments on six datasets under various parameter settings in both two-node and multi-node distributed scenarios. Extensive experiments confirm that DCIGAN outperforms the general baselines and achieves classification accuracy close to that of data aggregation.
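The abstract's core idea, replaying generated pseudo data for past classes instead of retaining raw data, can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: a fitted Gaussian (`GaussianGenerator`) stands in for the trained GAN generator, and a nearest-centroid rule stands in for the classifier; both names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class GaussianGenerator:
    """Toy stand-in for a per-class GAN generator: it keeps only the
    mean/covariance of past data and replays pseudo samples on demand."""
    def __init__(self, data):
        self.mean = data.mean(axis=0)
        self.cov = np.cov(data, rowvar=False)

    def sample(self, n):
        return rng.multivariate_normal(self.mean, self.cov, size=n)

def nearest_centroid_fit(X, y):
    """Fit one centroid per class label."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Assign each sample to the class with the nearest centroid."""
    labels = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array(labels)[dists.argmin(axis=0)]

# Phase 1: only class 0 exists; keep a generator instead of the raw data.
X0 = rng.normal(loc=0.0, scale=0.5, size=(200, 2))
gen0 = GaussianGenerator(X0)

# Phase 2: class 1 arrives; the old raw data is gone, so replay pseudo data
# from the stored generator and train on the mixed batch.
X1 = rng.normal(loc=3.0, scale=0.5, size=(200, 2))
X = np.vstack([gen0.sample(200), X1])
y = np.array([0] * 200 + [1] * 200)
centroids = nearest_centroid_fit(X, y)

# The incrementally trained classifier still recognizes both classes.
acc0 = (predict(centroids, X0) == 0).mean()
acc1 = (predict(centroids, X1) == 1).mean()
```

Because the pseudo samples approximate the old class's distribution, the classifier avoids catastrophic forgetting without any raw data leaving the node; DCIGAN's generator-fusion step (GF) would additionally merge such per-node generators into a global one.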
Pages: 768-775 (8 pages)