S2AC: Self-Supervised Attention Correlation Alignment Based on Mahalanobis Distance for Image Recognition

Cited by: 3
Authors
Wang, Zhi-Yong [1 ]
Kang, Dae-Ki [2 ]
Zhang, Cui-Ping [1 ]
Affiliations
[1] Weifang Univ Sci & Technol, Blockchain Lab Agr Vegetables, Weifang 262700, Peoples R China
[2] Dongseo Univ, Dept Comp Engn, 47 Jurye Ro, Busan 47011, South Korea
Keywords
domain adaptation; CORAL; self-supervised learning
DOI
10.3390/electronics12214419
CLC number
TP [Automation technology and computer technology];
Subject classification code
0812;
Abstract
Susceptibility to domain shift hinders the application and development of deep neural networks for image classification. Domain adaptation (DA) exploits domain-invariant characteristics to improve the performance of a model trained on labeled data from one domain (the source) when applied to an unlabeled domain (the target) with a different data distribution. However, existing DA methods typically rely on pretrained convolutional models (e.g., AlexNet, ResNet) for feature extraction; such models are confined to localized features and fail to capture long-range dependencies. Furthermore, many approaches depend too heavily on pseudo-labels, which can impair adaptation efficiency and lead to unstable, inconsistent results. In this research, we present S2AC, a novel approach for unsupervised deep domain adaptation that uses a stacked attention architecture as the feature-map extractor. Our method reduces domain discrepancy by minimizing a linear transformation of the second-order statistics (covariances) extended with the p-norm, while simultaneously designing heuristic pretext tasks to improve the generality of the learned representation. In addition, we develop a new trainable relative position embedding that not only reduces the model parameters but also enhances accuracy and speeds up training. To demonstrate the method's efficacy and controllability, we conducted extensive experiments on the Office31, Office_Caltech_10, and OfficeHome datasets. To the best of our knowledge, the proposed method is the first attempt to combine attention-based networks and self-supervised learning for image domain adaptation, and it shows promising results.
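The covariance-alignment term described in the abstract extends the CORAL family of losses, which match second-order feature statistics between source and target batches. As a point of reference only, the sketch below shows a standard Deep CORAL loss in PyTorch; it is an illustration of the underlying idea, not the paper's Mahalanobis/p-norm variant, and all function and variable names are assumptions introduced here.

```python
import torch

def coral_loss(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    """Standard Deep CORAL loss: squared Frobenius distance between the
    feature covariance matrices of a source batch and a target batch.
    (Illustrative baseline only; S2AC extends this idea with a p-norm /
    Mahalanobis-based formulation that is not reproduced here.)
    """
    d = source_feats.size(1)

    def covariance(x: torch.Tensor) -> torch.Tensor:
        n = x.size(0)
        x = x - x.mean(dim=0, keepdim=True)   # center features per dimension
        return (x.t() @ x) / (n - 1)          # unbiased covariance estimate

    c_s = covariance(source_feats)
    c_t = covariance(target_feats)
    # Normalization by 4*d^2 follows the original Deep CORAL formulation.
    return ((c_s - c_t) ** 2).sum() / (4 * d * d)


# Usage sketch: penalize covariance mismatch between domain batches.
src = torch.randn(32, 256)   # features of a labeled source batch
tgt = torch.randn(32, 256)   # features of an unlabeled target batch
loss = coral_loss(src, tgt)
```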
Pages: 19