Achieving domain generalization for underwater object detection by domain mixup and contrastive learning

Cited by: 34
Authors
Chen, Yang [1 ,2 ]
Song, Pinhao [1 ]
Liu, Hong [1 ]
Dai, Linhui [1 ]
Zhang, Xiaochuan [2 ]
Ding, Runwei [3 ]
Li, Shengquan [3 ]
Affiliations
[1] Peking Univ, Shenzhen Grad Sch, Key Lab Machine Percept, Shenzhen 518055, Guangdong, Peoples R China
[2] Chongqing Univ Technol, Sch Artificial Intelligence, Chongqing 401135, Peoples R China
[3] Peng Cheng Lab, Shenzhen 518038, Peoples R China
Funding
National Natural Science Foundation of China;
关键词
Domain generalization; Underwater; Object detection; Image stylization; Contrastive learning; RECOGNITION;
DOI
10.1016/j.neucom.2023.01.053
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The performance of existing underwater object detection methods degrades severely when they face the domain shift caused by complicated underwater environments. Due to the limited domain diversity of collected data, deep detectors easily memorize a few seen domains, which leads to low generalization ability. There are two common ideas for improving domain generalization performance. First, a detector trained on as many domains as possible can be expected to be domain-invariant. Second, images with the same semantic content but from different domains should yield equivalent hidden features. This paper builds on these two ideas and proposes a domain generalization framework that learns to generalize across domains through Domain Mixup and Contrastive Learning (DMCL). First, based on the physical formation of underwater images, an image captured in one underwater environment can be modeled as a linear transformation of the same scene in another. Therefore, a style transfer model that outputs a linear transformation matrix instead of a whole image is proposed to transform images from one source domain to another, enriching the domain diversity of the training data. Second, a Mixup operation interpolates features from different domains, sampling new domains on the domain manifold. Third, a contrastive loss is selectively applied to features from different domains to force the model to learn domain-invariant features while retaining discriminative capacity. With this method, detectors become robust to domain shift. In addition, a domain generalization benchmark for detection, S-UODAC2020, is set up to measure performance. Comprehensive experiments on S-UODAC2020 and two object recognition benchmarks (PACS and VLCS) demonstrate that the proposed method learns domain-invariant representations and outperforms other domain generalization methods.
The code is available at https://github.com/mousecpn/DMC-Domain-Generalization-for-Underwater-Object-Detection.git. © 2023 Published by Elsevier B.V.
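The feature-level Mixup described in the abstract can be illustrated with a minimal NumPy sketch. The function name, default Beta parameter, and API below are illustrative assumptions, not the paper's actual implementation; only the interpolation idea (sampling a mixing coefficient and linearly blending features from two source domains) comes from the abstract.

```python
import numpy as np


def domain_mixup(feat_a, feat_b, alpha=0.2, rng=None):
    """Linearly interpolate feature maps from two source domains.

    lam ~ Beta(alpha, alpha) controls where the mixed sample sits
    between the two domains, approximating a new point on the
    domain manifold. feat_a and feat_b must share the same shape.
    """
    rng = rng or np.random.default_rng()
    lam = float(rng.beta(alpha, alpha))
    mixed = lam * feat_a + (1.0 - lam) * feat_b
    return mixed, lam
```

In practice the same interpolation would be applied to intermediate detector features during training, with a fresh `lam` drawn per batch.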
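The contrastive loss the abstract mentions can be sketched with a standard InfoNCE-style objective: features of the same content from different domains form a positive pair, while other features act as negatives. The temperature value and L2 normalization below are common-practice assumptions, not details taken from the paper.

```python
import numpy as np


def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss for one anchor feature.

    anchor, positive: shape (d,); negatives: shape (k, d).
    Pulls the positive pair together and pushes negatives apart
    in cosine-similarity space.
    """
    def l2norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    a, p, n = l2norm(anchor), l2norm(positive), l2norm(negatives)
    # cosine similarities scaled by temperature; positive logit first
    logits = np.concatenate([[a @ p], n @ a]) / tau
    logits -= logits.max()  # numerical stability
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))
```

Minimizing this loss over pairs of domain-transformed features encourages hidden representations to be domain-invariant while negatives preserve discriminative structure.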
Pages: 20-34
Number of pages: 15