S2AC: Self-Supervised Attention Correlation Alignment Based on Mahalanobis Distance for Image Recognition

Cited by: 3
Authors
Wang, Zhi-Yong [1 ]
Kang, Dae-Ki [2 ]
Zhang, Cui-Ping [1 ]
Affiliations
[1] Weifang University of Science and Technology, Blockchain Laboratory of Agricultural Vegetables, Weifang 262700, People's Republic of China
[2] Dongseo University, Department of Computer Engineering, 47 Jurye Ro, Busan 47011, South Korea
Keywords
domain adaptation; CORAL; self-supervised learning
DOI
10.3390/electronics12214419
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline classification code
0812
Abstract
Susceptibility to domain changes hinders the application and development of deep neural networks for image classification. Domain adaptation (DA) exploits domain-invariant characteristics to improve the performance of a model trained on labeled data from one domain (the source domain) on an unlabeled domain (the target domain) with a different data distribution. However, existing DA methods typically rely on pretrained convolutional models (e.g., AlexNet, ResNet) for feature extraction; these models are confined to localized features and fail to capture long-range dependencies. Furthermore, many approaches depend too heavily on pseudo-labels, which can impair adaptation efficiency and lead to unstable, inconsistent results. In this research, we present S2AC, a novel approach to unsupervised deep domain adaptation that uses a stacked attention architecture as the feature-map extractor. Our method reduces the domain discrepancy by minimizing a linear transformation of the second-order statistics (covariances), extended with a p-norm, while simultaneously designing heuristic pretext tasks to improve the generality of the learned representation. In addition, we developed a new trainable relative position embedding that not only reduces the number of model parameters but also improves model accuracy and speeds up training. To demonstrate the method's efficacy and controllability, we conducted extensive experiments on the Office31, Office_Caltech_10, and OfficeHome datasets. To the best of our knowledge, the proposed method is the first attempt to incorporate attention-based networks and self-supervised learning for image domain adaptation, and it has shown promising results.
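The covariance-alignment term mentioned in the abstract belongs to the CORAL family of losses. The following is a minimal PyTorch sketch of a plain CORAL-style loss (squared Frobenius norm of the covariance gap, as in Deep CORAL); the paper's p-norm extension, Mahalanobis-distance weighting, attention backbone, and self-supervised pretext tasks are not reproduced here, and the function name and batch shapes are illustrative assumptions.

```python
# Minimal sketch of a CORAL-style covariance-alignment loss (PyTorch).
# This illustrates second-order statistic matching only; it is NOT the
# paper's full S2AC objective (no p-norm extension, no Mahalanobis
# weighting, no attention backbone, no pretext tasks).
import torch


def coral_loss(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Align the covariances of source- and target-domain feature batches.

    source, target: (batch, feature_dim) activations from a shared extractor.
    """
    d = source.size(1)

    def covariance(x: torch.Tensor) -> torch.Tensor:
        x = x - x.mean(dim=0, keepdim=True)    # center the features
        return (x.t() @ x) / (x.size(0) - 1)   # unbiased sample covariance

    c_s = covariance(source)
    c_t = covariance(target)
    # Squared Frobenius norm of the covariance gap, scaled as in Deep CORAL.
    return ((c_s - c_t) ** 2).sum() / (4.0 * d * d)


if __name__ == "__main__":
    src = torch.randn(32, 256)  # hypothetical source-domain features
    tgt = torch.randn(32, 256)  # hypothetical target-domain features
    print(coral_loss(src, tgt).item())
```

In practice such a term is added to the supervised source-domain classification loss with a trade-off weight, so the extractor learns features that are both discriminative and distribution-aligned.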
Pages: 19