Cross-Modal Transformers for Infrared and Visible Image Fusion

Cited by: 59
Authors
Park, Seonghyun [1,2]
Vien, An Gia [1]
Lee, Chul [1]
Affiliations
[1] Dongguk Univ, Dept Multimedia Engn, Seoul 04620, South Korea
[2] Hanwha Syst Co Ltd, Intelligence Software Team, Seongnam Si 13524, Gyeonggi Do, South Korea
Funding
National Research Foundation, Singapore
Keywords
Feature extraction; Transformers; Image fusion; Convolution; Task analysis; Data mining; Computer vision; transformer; self-attention; infrared image; visible image; FACIAL EXPRESSION RECOGNITION; REPRESENTATION; ATTENTION; FEATURES; NETWORK; JOINT
DOI
10.1109/TCSVT.2023.3289170
CLC Classification Numbers
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Image fusion techniques aim to generate more informative images by merging multiple images of different modalities with complementary information. Despite significant fusion performance improvements of recent learning-based approaches, most fusion algorithms have been developed based on convolutional neural networks (CNNs), which stack deep layers to obtain a large receptive field for feature extraction. However, important details and contexts of the source images may be lost through a series of convolution layers. In this work, we propose a cross-modal transformer-based fusion (CMTFusion) algorithm for infrared and visible image fusion that captures global interactions by faithfully extracting complementary information from source images. Specifically, we first extract the multiscale feature maps of infrared and visible images. Then, we develop cross-modal transformers (CMTs) to retain complementary information in the source images by removing redundancies in both the spatial and channel domains. To this end, we design a gated bottleneck that integrates cross-domain interaction to consider the characteristics of the source images. Finally, a fusion result is obtained by exploiting spatial-channel information in refined feature maps using a fusion block. Experimental results on multiple datasets demonstrate that the proposed algorithm provides better fusion performance than state-of-the-art infrared and visible image fusion algorithms, both quantitatively and qualitatively. Furthermore, we show that the proposed algorithm can be used to improve the performance of computer vision tasks, e.g., object detection and monocular depth estimation.
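The abstract outlines the CMTFusion data flow: per-modality multiscale feature extraction, cross-modal transformers (CMTs) with a gated bottleneck that exchange complementary information between the infrared and visible streams, and a fusion block that produces the final image. Below is a minimal PyTorch sketch of that flow, not the authors' implementation: it assumes a single scale rather than the paper's multiscale features, uses plain spatial cross-attention in place of the paper's combined spatial- and channel-domain redundancy removal, and all module names (CrossModalAttention, GatedBottleneck, CMTFusionSketch) are hypothetical.

# Hypothetical sketch of the CMTFusion pipeline described in the abstract.
# Shapes, module names, and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Cross-attention: queries from one modality, keys/values from the other."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, y):
        # x attends to y; tokens are flattened spatial positions, shape (B, H*W, C)
        out, _ = self.attn(self.norm(x), self.norm(y), self.norm(y))
        return x + out

class GatedBottleneck(nn.Module):
    """Gate that decides, per feature, how much cross-modal signal to absorb."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, own, cross):
        g = self.gate(torch.cat([own, cross], dim=-1))
        return g * cross + (1.0 - g) * own

class CMTFusionSketch(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        # single-channel inputs assumed for both modalities (simplification)
        self.enc_ir = nn.Conv2d(1, dim, 3, padding=1)
        self.enc_vis = nn.Conv2d(1, dim, 3, padding=1)
        self.cmt_ir = CrossModalAttention(dim)
        self.cmt_vis = CrossModalAttention(dim)
        self.gate_ir = GatedBottleneck(dim)
        self.gate_vis = GatedBottleneck(dim)
        self.fuse = nn.Conv2d(2 * dim, 1, 3, padding=1)  # fusion-block stand-in

    def forward(self, ir, vis):
        B, _, H, W = ir.shape
        f_ir, f_vis = self.enc_ir(ir), self.enc_vis(vis)
        # flatten feature maps to token sequences: (B, C, H, W) -> (B, H*W, C)
        t_ir = f_ir.flatten(2).transpose(1, 2)
        t_vis = f_vis.flatten(2).transpose(1, 2)
        # each stream attends to the other, then gates the exchanged features
        r_ir = self.gate_ir(t_ir, self.cmt_ir(t_ir, t_vis))
        r_vis = self.gate_vis(t_vis, self.cmt_vis(t_vis, t_ir))
        # restore spatial layout and fuse the refined streams
        m_ir = r_ir.transpose(1, 2).reshape(B, -1, H, W)
        m_vis = r_vis.transpose(1, 2).reshape(B, -1, H, W)
        return self.fuse(torch.cat([m_ir, m_vis], dim=1))

ir = torch.rand(1, 1, 64, 64)
vis = torch.rand(1, 1, 64, 64)
print(CMTFusionSketch()(ir, vis).shape)  # torch.Size([1, 1, 64, 64])

The sigmoid gate illustrates the design intent attributed to the gated bottleneck: each stream keeps its own features where the other modality is redundant and absorbs cross-modal features where they are complementary.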
Pages: 770-785
Number of pages: 16