On the Importance of Attention and Augmentations for Hypothesis Transfer in Domain Adaptation and Generalization

Cited by: 3
Authors
Thomas, Georgi [1 ]
Sahay, Rajat [1 ]
Jahan, Chowdhury Sadman [1 ]
Manjrekar, Mihir [1 ]
Popp, Dan [1 ]
Savakis, Andreas [1 ]
Affiliations
[1] Rochester Institute of Technology, Rochester, NY 14623, USA
Keywords
domain adaptation; domain generalization; vision transformers; convolutional neural networks;
DOI
10.3390/s23208409
Chinese Library Classification
O65 [Analytical Chemistry];
Discipline Classification Codes
070302; 081704;
Abstract
Unsupervised domain adaptation (UDA) aims to mitigate the performance drop caused by the distribution shift between the training and testing datasets. UDA methods have achieved performance gains for models trained on a labeled source domain and deployed on a target domain with only unlabeled data. Convolutional neural networks (CNNs) have been the standard feature extractors in domain adaptation. Recently, attention-based transformer models have emerged as effective alternatives for computer vision tasks. In this paper, we benchmark three attention-based architectures, specifically the vision transformer (ViT), shifted window transformer (SWIN), and dual attention vision transformer (DAViT), against the convolutional architectures ResNet, HRNet, and the attention-based ConvNeXt, to assess the performance of different backbones for domain generalization and adaptation. We incorporate these backbone architectures as feature extractors in the source hypothesis transfer (SHOT) framework for UDA. SHOT leverages the knowledge learned in the source domain to align the image features of unlabeled target data, in the absence of source domain data, using self-supervised deep feature clustering and self-training. We analyze the generalization and adaptation performance of these models on standard UDA datasets and aerial UDA datasets. In addition, we modernize the training procedure commonly used in UDA tasks by adding image augmentation techniques to help models generate richer features. Our results show that ConvNeXt and SWIN offer the best performance, indicating that the attention mechanism is very beneficial for domain generalization and adaptation with both transformer and convolutional architectures. Our ablation study shows that our modernized training recipe, within the SHOT framework, significantly boosts performance on aerial datasets.
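To make the setup described in the abstract concrete, the sketch below illustrates how a backbone from the timm library could be dropped into a SHOT-style, source-free adaptation loop alongside a stronger augmentation recipe. This is a minimal illustration, not the authors' released implementation: the backbone string, `NUM_CLASSES`, `target_loader`, and all hyperparameters are placeholder assumptions, and SHOT's cluster-based pseudo-labeling term is only indicated in a comment.

```python
# Minimal sketch of SHOT-style source-free adaptation with a modern augmentation
# recipe. NOT the authors' code; assumes PyTorch, timm, and torchvision.
import torch
import torch.nn as nn
import torch.nn.functional as F
import timm
from torchvision import transforms

NUM_CLASSES = 65  # placeholder (e.g., an Office-Home-sized label set)

# 1) Augmentations: richer photometric/geometric jitter than a plain resize + crop.
train_tfms = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.RandAugment(num_ops=2, magnitude=9),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# 2) Any timm backbone can serve as the feature extractor, e.g.
#    "vit_base_patch16_224", "swin_base_patch4_window7_224", or "convnext_base".
backbone = timm.create_model("convnext_base", pretrained=True, num_classes=0)
classifier = nn.Linear(backbone.num_features, NUM_CLASSES)  # frozen source "hypothesis"
for p in classifier.parameters():
    p.requires_grad_(False)

# 3) Information-maximization objective: confident per-sample predictions
#    (low entropy) while keeping the marginal prediction distribution diverse.
def information_maximization(logits):
    probs = F.softmax(logits, dim=1)
    ent = -(probs * torch.log(probs + 1e-6)).sum(dim=1).mean()
    marginal = probs.mean(dim=0)
    div = (marginal * torch.log(marginal + 1e-6)).sum()
    return ent + div

optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3,
                            momentum=0.9, weight_decay=1e-3)

def adapt_one_epoch(target_loader):
    """One pass over unlabeled target data; only the backbone is updated."""
    for images, _ in target_loader:
        logits = classifier(backbone(images))
        loss = information_maximization(logits)
        # SHOT additionally adds cross-entropy on cluster-derived pseudo-labels,
        # refreshed each epoch from class centroids in feature space (omitted here).
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Swapping the backbone string for another timm model reproduces the backbone-comparison setting described above: only the feature extractor changes, while the frozen source classifier and the adaptation objective stay the same.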
Pages: 22
Related Papers
50 records in total
  • [31] Domain Adaptation and Generalization of Functional Medical Data: A Systematic Survey of Brain Data
    Sarafraz, Gita
    Behnamnia, Armin
    Hosseinzadeh, Mehran
    Balapour, Ali
    Meghrazi, Amin
    Rabiee, Hamid R.
    ACM COMPUTING SURVEYS, 2024, 56 (10)
  • [32] Transferable attention networks for adversarial domain adaptation
    Zhang, Changchun
    Zhao, Qingjie
    Wang, Yu
    INFORMATION SCIENCES, 2020, 539 : 422 - 433
  • [33] Unsupervised Domain Adaptation via Importance Sampling
    Xu, Xuemiao
    He, Hai
    Zhang, Huaidong
    Xu, Yangyang
    He, Shengfeng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30 (12) : 4688 - 4699
  • [34] Domain Adaptation for Generalization of Face Presentation Attack Detection in Mobile Settings with Minimal Information
    Mohammadi, Amir
    Bhattacharjee, Sushil
    Marcel, Sebastien
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 1001 - 1005
  • [35] Domain-Agnostic Priors for Semantic Segmentation Under Unsupervised Domain Adaptation and Domain Generalization
    Huo, Xinyue
    Xie, Lingxi
    Hu, Hengtong
    Zhou, Wengang
    Li, Houqiang
    Tian, Qi
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (09) : 3954 - 3976
  • [36] Transfer Domain Class Clustering for Unsupervised Domain Adaptation
    Fan, Yunxin
    Yan, Gang
    Li, Shuang
    Song, Shiji
    Wang, Wei
    Peng, Xinping
    PROCEEDINGS OF THE 3RD INTERNATIONAL CONFERENCE ON ELECTRICAL AND INFORMATION TECHNOLOGIES FOR RAIL TRANSPORTATION (EITRT) 2017: ELECTRICAL TRACTION, 2018, 482 : 827 - 835
  • [37] Force Myography-Based Human Robot Interactions via Deep Domain Adaptation and Generalization
    Zakia, Umme
    Menon, Carlo
    SENSORS, 2022, 22 (01)
  • [38] Correlation alignment with attention mechanism for unsupervised domain adaptation
    Chen, Rong
    Ren, Chongguang
    WEB INTELLIGENCE, 2020, 18 (04) : 261 - 267
  • [39] Attention Guided Multiple Source and Target Domain Adaptation
    Wang, Yuxi
    Zhang, Zhaoxiang
    Hao, Wangli
    Song, Chunfeng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 892 - 906
  • [40] Exploring Category Attention for Open Set Domain Adaptation
    Wang, Jinghua
    IEEE ACCESS, 2021, 9 : 9154 - 9162