SSTN: Self-Supervised Domain Adaptation Thermal Object Detection for Autonomous Driving

Cited by: 27
Authors
Munir, Farzeen [1 ]
Azam, Shoaib [1 ]
Jeon, Moongu [1 ]
Affiliations
[1] Gwangju Inst Sci & Technol, Sch Elect Engn & Comp Sci, Gwangju, South Korea
Source
2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) | 2021
Keywords
Self-supervised learning; Contrastive learning; Thermal object detection
DOI
10.1109/IROS51168.2021.9636353
CLC number
TP [Automation technology; computer technology]
Subject classification number
0812
Abstract
Perception of the environment plays a decisive role in the safe and secure operation of autonomous vehicles. It is analogous to human vision: the human brain perceives the environment by integrating different sensory channels into a view-invariant representation. In this context, exteroceptive sensors such as cameras and Lidar are deployed on autonomous vehicles to perceive the environment. These sensors have demonstrated their value in the visible spectrum domain, yet they degrade in adverse conditions; for instance, their limited operational capability at night can lead to fatal accidents. This work explores thermal object detection and learns a view-invariant representation by employing a self-supervised contrastive learning approach. We propose a deep neural network, the Self-Supervised Thermal Network (SSTN), which learns feature embeddings that maximize the information shared between the visible and infrared spectrum domains via contrastive learning. These learned feature representations are then employed for thermal object detection using a multi-scale encoder-decoder transformer network. The proposed method is extensively evaluated on two publicly available datasets, the FLIR-ADAS dataset and the KAIST Multi-Spectral dataset, and the experimental results illustrate its efficacy.
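The cross-modal objective the abstract describes — maximizing information between paired visible and infrared embeddings — is commonly implemented as a symmetric InfoNCE contrastive loss. The sketch below is a minimal NumPy illustration of that general technique, not the paper's exact loss; the function name, temperature value, and embedding shapes are assumptions.

```python
import numpy as np

def info_nce(rgb_emb, thermal_emb, temperature=0.1):
    """Symmetric InfoNCE loss over paired visible/thermal embeddings.

    rgb_emb, thermal_emb: (N, D) arrays where row i of each array
    encodes the same scene; all other rows act as negatives.
    """
    # L2-normalise so the dot product is cosine similarity.
    rgb = rgb_emb / np.linalg.norm(rgb_emb, axis=1, keepdims=True)
    thr = thermal_emb / np.linalg.norm(thermal_emb, axis=1, keepdims=True)

    logits = rgb @ thr.T / temperature   # (N, N) similarity matrix
    labels = np.arange(len(logits))      # positive pairs on the diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_prob = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_prob[labels, labels].mean()

    # Average the visible->thermal and thermal->visible directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Minimizing this loss pulls embeddings of the same scene together across the two spectra and pushes mismatched scenes apart, which is the view-invariance property the detector head then builds on.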
Pages: 206-213
Page count: 8
Related papers (50 items)
[22]   Unsupervised New-set Domain Adaptation with Self-supervised Knowledge [J].
Wang Y.-Y. ;
Sun G.-W. ;
Zhao G.-X. ;
Xue H. .
Ruan Jian Xue Bao/Journal of Software, 2022, 33 (04) :1170-1182
[23]   Robust self-supervised learning for source-free domain adaptation [J].
Tian, Liang ;
Zhou, Lihua ;
Zhang, Hao ;
Wang, Zhenbin ;
Ye, Mao .
SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (05) :2405-2413
[24]   Self-supervised domain adaptation for machinery remaining useful life prediction [J].
Le Xuan, Quy ;
Munderloh, Marco ;
Ostermann, Joern .
RELIABILITY ENGINEERING & SYSTEM SAFETY, 2024, 250
[25]   Self-Supervised Domain Adaptation for 6DoF Pose Estimation [J].
Jin, Juseong ;
Jeong, Eunju ;
Cho, Joonmyun ;
Kim, Young-Gon .
IEEE ACCESS, 2024, 12 :101528-101535
[26]   Towards JPEG-Resistant Image Forgery Detection and Localization Via Self-Supervised Domain Adaptation [J].
Rao, Yuan ;
Ni, Jiangqun ;
Zhang, Weizhe ;
Huang, Jiwu .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2025, 47 (05) :3285-3297
[27]   A Self-Supervised Approach for Enhanced Feature Representations in Object Detection Tasks [J].
Vilabella, Santiago C. ;
Perez-Nunez, Pablo ;
Remeseiro, Beatriz .
2024 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN 2024, 2024
[28]   Cuepervision: self-supervised learning for continuous domain adaptation without catastrophic forgetting [J].
Schutera, Mark ;
Hafner, Frank M. ;
Abhau, Jochen ;
Hagenmeyer, Veit ;
Mikut, Ralf ;
Reischl, Markus .
IMAGE AND VISION COMPUTING, 2021, 106
[29]   Generic network for domain adaptation based on self-supervised learning and deep clustering [J].
Baffour, Adu Asare ;
Qin, Zhen ;
Geng, Ji ;
Ding, Yi ;
Deng, Fuhu ;
Qin, Zhiguang .
NEUROCOMPUTING, 2022, 476 :126-136
[30]   Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [J].
Yuan, Jin ;
Hou, Feng ;
Du, Yangzhou ;
Shi, Zhongchao ;
Geng, Xin ;
Fan, Jianping ;
Rui, Yong .
PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, :3907-3916