DFTI: Dual-Branch Fusion Network Based on Transformer and Inception for Space Noncooperative Objects

Cited by: 0
Authors
Zhang, Zhao [1 ]
Zhou, Dong [1 ]
Sun, Guanghui [1 ]
Hu, YuHui [1 ]
Deng, Runran [2 ]
Affiliations
[1] Harbin Inst Technol, Dept Control Sci & Engn, Harbin 150001, Peoples R China
[2] Beijing Inst Spacecraft Syst Engn, Beijing 100094, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Space vehicles; Feature extraction; Image fusion; Transformers; Task analysis; Visualization; Training; Autoencoder network; deep learning; image fusion; space noncooperative object; transformer; VISIBLE IMAGE FUSION;
DOI
10.1109/TIM.2024.3403182
CLC Classification Code
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Code
0808 ; 0809 ;
Abstract
Due to adverse illumination conditions in space, noncooperative object perception based on multisource image fusion is crucial for on-orbit maintenance and orbital debris removal. In this article, we first propose a dual-branch multiscale feature-extraction encoder combining a Transformer block (TB) and an Inception block (IB) to extract the global and local features of visible and infrared images and establish high-dimensional semantic connections. Second, departing from traditional hand-designed fusion strategies, we propose a feature fusion module called the cross-convolution feature fusion (CCFF) module, which achieves feature-level image fusion. Building on these components, we propose a dual-branch fusion network based on Transformer and Inception (DFTI) for space noncooperative objects, an image fusion framework built on an autoencoder architecture and unsupervised learning. The fused image simultaneously retains the color texture details and the contour energy information of space noncooperative objects. Finally, we construct a fusion dataset of infrared and visible images for space noncooperative objects (FIV-SNO) and compare DFTI with seven state-of-the-art methods. In addition, object tracking, as a follow-up high-level vision task, confirms the effectiveness of our method. The experimental results demonstrate that, compared with other advanced methods, the entropy (EN) and average gradient (AG) of the images fused by the DFTI network increase by 0.11 and 0.06, respectively. Our method exhibits excellent performance in both quantitative measures and qualitative evaluation.
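The abstract reports gains in entropy (EN) and average gradient (AG), two standard no-reference fusion-quality metrics. As an illustration only (this is not the paper's code, and the exact normalization used by the authors is an assumption), the common definitions of EN (Shannon entropy of the grayscale histogram) and AG (mean magnitude of local intensity differences) can be sketched as:

```python
import numpy as np

def entropy(img: np.ndarray) -> float:
    """EN: Shannon entropy (bits) of an 8-bit grayscale image's histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]  # 0*log(0) terms contribute nothing
    return float(-(p * np.log2(p)).sum())

def average_gradient(img: np.ndarray) -> float:
    """AG: mean of sqrt((gx^2 + gy^2) / 2) over forward-difference gradients."""
    f = img.astype(np.float64)
    gx = f[:, 1:] - f[:, :-1]  # horizontal differences
    gy = f[1:, :] - f[:-1, :]  # vertical differences
    # crop both gradient maps to their common (H-1, W-1) region
    gx, gy = gx[:-1, :], gy[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```

Higher EN indicates a richer intensity distribution in the fused image, and higher AG indicates sharper texture and edges, which is why both are used to compare fusion methods.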
Pages: 11
Related Papers
50 records
  • [31] Fusformer: A Transformer-Based Fusion Network for Hyperspectral Image Super-Resolution
    Hu, Jin-Fan
    Huang, Ting-Zhu
    Deng, Liang-Jian
    Dou, Hong-Xia
    Hong, Danfeng
    Vivone, Gemine
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2022, 19
  • [32] CGTD-Net: Channel-Wise Global Transformer-Based Dual-Branch Network for Industrial Strip Steel Surface Defect Detection
    Liu, Huan
    Chen, Chao
    Hu, Ruikuan
    Bin, Junchi
    Dong, Haobin
    Liu, Zheng
    IEEE SENSORS JOURNAL, 2024, 24 (04) : 4863 - 4873
  • [33] CFIFusion: Dual-Branch Complementary Feature Injection Network for Medical Image Fusion
    Xie, Yiyuan
    Yu, Lei
    Ding, Cheng
    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, 2024, 34 (04)
  • [34] A Dual-Branch Detail Extraction Network for Hyperspectral Pansharpening
    Qu, Jiahui
    Hou, Shaoxiong
    Dong, Wenqian
    Xiao, Song
    Du, Qian
    Li, Yunsong
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [35] PFRNet: Dual-Branch Progressive Fusion Rectification Network for Monaural Speech Enhancement
    Yu, Runxiang
    Zhao, Ziwei
    Ye, Zhongfu
    IEEE SIGNAL PROCESSING LETTERS, 2022, 29 : 2358 - 2362
  • [36] HaarFuse: A dual-branch infrared and visible light image fusion network based on Haar wavelet transform
    Wang, Yuequn
    Liu, Jie
    Wang, Jianli
    Yang, Leqiang
    Dong, Bo
    Li, Zhengwei
    PATTERN RECOGNITION, 2025, 164
  • [37] RFTNet: Region-Attention Fusion Network Combined with Dual-Branch Vision Transformer for Multimodal Brain Tumor Image Segmentation
    Jiao, Chunxia
    Yang, Tiejun
    Yan, Yanghui
    Yang, Aolin
    ELECTRONICS, 2024, 13 (01)
  • [38] DDFNet-A: Attention-Based Dual-Branch Feature Decomposition Fusion Network for Infrared and Visible Image Fusion
    Wei, Qiancheng
    Liu, Ying
    Jiang, Xiaoping
    Zhang, Ben
    Su, Qiya
    Yu, Muyao
    REMOTE SENSING, 2024, 16 (10)
  • [39] Rain removal method for single image of dual-branch joint network based on sparse transformer
    Qin, Fangfang
    Jia, Zongpu
    Pang, Xiaoyan
    Zhao, Shan
    COMPLEX & INTELLIGENT SYSTEMS, 2025, 11 (01)
  • [40] A CNN- and Transformer-Based Dual-Branch Network for Change Detection with Cross-Layer Feature Fusion and Edge Constraints
    Wang, Xiaofeng
    Guo, Zhongyu
    Feng, Ruyi
    REMOTE SENSING, 2024, 16 (14)