Self-Supervised Learning Across Domains

Cited by: 45
Authors
Bucci, Silvia [1 ,2 ]
D'Innocente, Antonio [2 ,3 ]
Liao, Yujun [1 ]
Carlucci, Fabio Maria [4 ]
Caputo, Barbara [1 ,2 ]
Tommasi, Tatiana [1 ,2 ]
Affiliations
[1] Politecn Torino, I-10129 Turin, Italy
[2] Italian Inst Technol, I-16132 Genoa, Italy
[3] Univ Rome Sapienza, I-00185 Rome, Italy
[4] Huawei Noah's Ark Lab, London N1C 4AG, England
Funding
European Research Council;
Keywords
Task analysis; Visualization; Indexes; Adaptation models; Data models; Training; Image recognition; Self-supervision; domain generalization; domain adaptation; multi-task learning;
DOI
10.1109/TPAMI.2021.3070791
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Human adaptability relies crucially on learning and merging knowledge from both supervised and unsupervised tasks: parents point out a few important concepts, and the children then fill in the gaps on their own. This is particularly effective because supervised learning can never be exhaustive, so learning autonomously allows the learner to discover invariances and regularities that help it generalize. In this paper we propose to apply a similar approach to the problem of object recognition across domains: our model learns the semantic labels in a supervised fashion and broadens its understanding of the data by learning from self-supervised signals on the same images. This secondary task helps the network focus on object shapes, learning concepts such as spatial orientation and part correlation, while acting as a regularizer for the classification task across multiple visual domains. Extensive experiments confirm our intuition and show that our multi-task method, which combines supervised and self-supervised knowledge, yields results competitive with more complex domain generalization and adaptation solutions. It also proves its potential in the novel and challenging predictive and partial domain adaptation scenarios.
Pages: 5516-5528
Number of pages: 13
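
As a concrete illustration of the multi-task idea described in the abstract above, the sketch below pairs a supervised classification head with a self-supervised auxiliary head on a shared backbone; here the auxiliary signal is 4-way rotation recognition. This is a minimal sketch under stated assumptions: the names (MultiTaskNet, joint_loss), the ResNet-18 backbone, and the loss weight aux_weight are illustrative choices, not the paper's exact architecture or hyperparameters.

```python
# Minimal sketch: shared backbone with a supervised object-classification head
# and a self-supervised auxiliary head (4-way rotation recognition).
# Backbone choice, head names, and aux_weight are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models


class MultiTaskNet(nn.Module):
    def __init__(self, num_classes, num_rotations=4):
        super().__init__()
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                          # shared feature extractor
        self.backbone = backbone
        self.cls_head = nn.Linear(feat_dim, num_classes)     # supervised semantic labels
        self.aux_head = nn.Linear(feat_dim, num_rotations)   # self-supervised signal

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.aux_head(feats)


def joint_loss(model, images, labels, rotated_images, rotation_labels, aux_weight=0.7):
    """Supervised cross-entropy plus a weighted self-supervised term."""
    cls_logits, _ = model(images)
    _, rot_logits = model(rotated_images)
    ce = nn.functional.cross_entropy
    return ce(cls_logits, labels) + aux_weight * ce(rot_logits, rotation_labels)
```

In such a setup, each training batch would supply the labeled images together with rotated copies and their rotation indices, so both heads update the shared backbone; at test time only the classification head is used, while the auxiliary task serves purely as a regularizer during training.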