Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition

Cited by: 10
Authors
Wang, Guangrun [1]
Lin, Liang [1,2]
Chen, Rongcong [1]
Wang, Guangcong [1]
Zhang, Jiqi [1]
Affiliations
[1] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou 510275, Peoples R China
[2] DarkMatter AI Res, Guangzhou 511400, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep neural networks; image recognition; neural architecture adaptation; structured learning; weight pretraining and finetuning (WP&F);
DOI
10.1109/TNNLS.2021.3070605
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Current state-of-the-art visual recognition systems usually rely on the following pipeline: 1) pretraining a neural network on a large-scale data set (e.g., ImageNet) and 2) finetuning the network weights on a smaller, task-specific data set. This pipeline assumes that adapting the weights alone can transfer the network's capability from one domain to another, resting on the strong assumption that a single fixed architecture is appropriate for all domains. However, each domain, with its distinct recognition target, may need different levels or paths of the feature hierarchy: some neurons become redundant, while others are reactivated to form new network structures. In this work, we show that dynamically adapting the network architecture to each domain task, jointly with weight finetuning, improves both efficiency and effectiveness over the existing image recognition pipeline, which tunes only the weights and leaves the architecture fixed. Our method generalizes readily to an unsupervised paradigm by replacing supernet training with self-supervised learning on the source domain tasks and performing linear evaluation on the downstream tasks, which further improves its search efficiency. Moreover, we provide principled and empirical analysis of why our approach works by investigating the ineffectiveness of existing neural architecture search; we find that preserving the joint distribution of the network architecture and weights is important. This analysis not only benefits image recognition but also offers insights for crafting neural networks. Experiments on five representative image recognition tasks (person re-identification, age estimation, gender recognition, image classification, and unsupervised domain adaptation) demonstrate the effectiveness of our method.
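The weight-transfer pipeline the abstract critiques (pretrain on a large source task, then finetune the same weights on a smaller target task) can be sketched with a toy logistic-regression example. Everything here is hypothetical and for illustration only: the data, dimensions, and learning rates are invented, and the paper's actual point is that the architecture, not just the weights, should adapt across domains.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, X, y, lr=0.1, steps=200):
    """Full-batch gradient descent on the logistic loss; returns updated weights."""
    for _ in range(steps):
        p = sigmoid(X @ w)
        w = w - lr * (X.T @ (p - y)) / len(y)
    return w

def loss(w, X, y):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# Source task: plentiful data generated from a ground-truth direction.
w_true = np.array([2.0, -1.0, 0.5])
X_src = rng.normal(size=(500, 3))
y_src = (X_src @ w_true > 0).astype(float)

# Target task: few samples, a related but shifted decision rule.
X_tgt = rng.normal(size=(20, 3))
y_tgt = (X_tgt @ (w_true + 0.3) > 0).astype(float)

w0 = np.zeros(3)
w_pre = train(w0, X_src, y_src)               # "pretraining" on the source task
w_ft = train(w_pre, X_tgt, y_tgt, steps=50)   # "finetuning" on the target task
w_scratch = train(w0, X_tgt, y_tgt, steps=50) # same budget, no transfer

print("finetuned loss:", round(loss(w_ft, X_tgt, y_tgt), 4))
print("from-scratch loss:", round(loss(w_scratch, X_tgt, y_tgt), 4))
```

The sketch keeps the architecture (here, a single linear layer) fixed and only carries the weights across tasks, which is exactly the assumption the paper argues against: the proposed method additionally searches for a domain-specific architecture during the transfer.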
Pages: 5401-5415
Number of pages: 15