On the Importance of Attention and Augmentations for Hypothesis Transfer in Domain Adaptation and Generalization

Cited by: 3
Authors
Thomas, Georgi [1 ]
Sahay, Rajat [1 ]
Jahan, Chowdhury Sadman [1 ]
Manjrekar, Mihir [1 ]
Popp, Dan [1 ]
Savakis, Andreas [1 ]
Affiliations
[1] Rochester Inst Technol, Rochester, NY 14623 USA
Keywords
domain adaptation; domain generalization; vision transformers; convolutional neural networks;
DOI
10.3390/s23208409
Abstract
Unsupervised domain adaptation (UDA) aims to mitigate the performance drop caused by the distribution shift between training and testing datasets. UDA methods enable models trained on a labeled source domain to achieve performance gains on a target domain that contains only unlabeled data. Convolutional neural networks (CNNs) have been the standard feature extractors for domain adaptation. Recently, attention-based transformer models have emerged as effective alternatives for computer vision tasks. In this paper, we benchmark three attention-based architectures, namely the vision transformer (ViT), the shifted window transformer (SWIN), and the dual attention vision transformer (DAViT), against the convolutional architectures ResNet and HRNet, as well as the attention-based ConvNeXt, to assess how different backbones perform for domain generalization and adaptation. We incorporate these backbone architectures as feature extractors in the source hypothesis transfer (SHOT) framework for UDA. SHOT leverages the knowledge learned in the source domain to align the image features of unlabeled target data, in the absence of any source-domain data, using self-supervised deep feature clustering and self-training. We analyze the generalization and adaptation performance of these models on standard UDA datasets and on aerial UDA datasets. In addition, we modernize the training procedure commonly used in UDA tasks by adding image augmentation techniques that help the models produce richer features. Our results show that ConvNeXt and SWIN offer the best performance, indicating that the attention mechanism is highly beneficial for domain generalization and adaptation in both transformer and convolutional architectures. Our ablation study shows that our modernized training recipe, within the SHOT framework, significantly boosts performance on aerial datasets.
Pages: 22
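Illustrative sketch
The abstract describes how SHOT adapts a source-trained model to unlabeled target data through self-supervised deep feature clustering and self-training, and how the training recipe is modernized with image augmentations. The following is a minimal PyTorch sketch of that general idea, not the authors' released code: the ResNet-50 stand-in backbone, the augmentation choices, the feature dimension, and the class count are illustrative assumptions rather than values taken from the paper.

# Minimal sketch (not the authors' code) of a SHOT-style target adaptation step:
# the source classifier head stays frozen while the backbone feature extractor
# is adapted on unlabeled target data using clustered pseudo-labels and
# self-training. Backbone, augmentations, and dimensions are assumptions.
import torch
import torch.nn.functional as F
import torchvision.transforms as T
from torchvision.models import resnet50, ResNet50_Weights

# Illustrative augmentation recipe for a "modernized" training procedure.
train_transform = T.Compose([
    T.RandomResizedCrop(224, scale=(0.7, 1.0)),
    T.RandomHorizontalFlip(),
    T.ColorJitter(0.4, 0.4, 0.4),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Backbone feature extractor (stand-in for ViT / SWIN / DAViT / ConvNeXt / HRNet).
backbone = resnet50(weights=ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()                  # expose 2048-d features
num_classes = 31                                   # dataset-dependent, e.g. Office-31
classifier = torch.nn.Linear(2048, num_classes)    # source hypothesis, kept frozen
for p in classifier.parameters():
    p.requires_grad = False

@torch.no_grad()
def cluster_pseudo_labels(feats: torch.Tensor, logits: torch.Tensor) -> torch.Tensor:
    """Deep feature clustering: build class centroids weighted by the softmax
    predictions, then assign each sample to its nearest centroid."""
    probs = F.softmax(logits, dim=1)                     # (N, C)
    feats = F.normalize(feats, dim=1)                    # (N, D)
    centroids = F.normalize(probs.t() @ feats, dim=1)    # (C, D)
    return (feats @ centroids.t()).argmax(dim=1)         # nearest-centroid labels

def adaptation_loss(logits: torch.Tensor, pseudo_labels: torch.Tensor,
                    ce_weight: float = 0.3) -> torch.Tensor:
    """Information maximization (confident yet diverse predictions) plus
    self-training cross-entropy on the clustered pseudo-labels."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-6)).sum(dim=1).mean()
    mean_probs = probs.mean(dim=0)
    diversity = (mean_probs * torch.log(mean_probs + 1e-6)).sum()
    return entropy + diversity + ce_weight * F.cross_entropy(logits, pseudo_labels)

In a full adaptation loop, the pseudo-labels would typically be recomputed periodically over the entire target set, and only the backbone parameters would be updated while the frozen classifier serves as the transferred source hypothesis.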