DFTI: Dual-Branch Fusion Network Based on Transformer and Inception for Space Noncooperative Objects

Cited by: 0
Authors
Zhang, Zhao [1 ]
Zhou, Dong [1 ]
Sun, Guanghui [1 ]
Hu, YuHui [1 ]
Deng, Runran [2 ]
Affiliations
[1] Harbin Inst Technol, Dept Control Sci & Engn, Harbin 150001, Peoples R China
[2] Beijing Inst Spacecraft Syst Engn, Beijing 100094, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Space vehicles; Feature extraction; Image fusion; Transformers; Task analysis; Visualization; Training; Autoencoder network; deep learning; image fusion; space noncooperative object; transformer; VISIBLE IMAGE FUSION;
DOI
10.1109/TIM.2024.3403182
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Due to adverse illumination conditions in space, noncooperative object perception based on multisource image fusion is crucial for on-orbit maintenance and orbital debris removal. In this article, we first propose a dual-branch multiscale feature extraction encoder combining a Transformer block (TB) and an Inception block (IB) to extract the global and local features of visible and infrared images and establish high-dimensional semantic connections. Second, departing from traditional handcrafted fusion strategies, we propose a cross-convolution feature fusion (CCFF) module that performs fusion at the image feature level. Building on these components, we propose a dual-branch fusion network based on Transformer and Inception (DFTI) for space noncooperative objects, an image fusion framework built on an autoencoder architecture and trained with unsupervised learning. The fused image simultaneously retains the color texture details and the contour energy information of space noncooperative objects. Finally, we construct a fusion dataset of infrared and visible images for space noncooperative objects (FIV-SNO) and compare DFTI with seven state-of-the-art methods. In addition, object tracking, as a follow-up high-level vision task, demonstrates the effectiveness of our method. The experimental results show that, compared with other advanced methods, the entropy (EN) and average gradient (AG) of images fused by the DFTI network increase by 0.11 and 0.06, respectively. Our method exhibits excellent performance in both quantitative measures and qualitative evaluation.
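As a reading aid for the two quantitative measures quoted in the abstract, the following is a minimal NumPy sketch of how entropy (EN) and average gradient (AG) are conventionally computed for a grayscale fused image. It follows the standard definitions of these metrics rather than any code released with the paper, and the `fused` array is a hypothetical placeholder.

```python
import numpy as np

def entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (EN) of a grayscale image with values in [0, 255]."""
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]                        # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img: np.ndarray) -> float:
    """Average gradient (AG): mean magnitude of local intensity changes."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal differences, cropped to (H-1, W-1)
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences, cropped to (H-1, W-1)
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

# Hypothetical usage with a placeholder fused image.
fused = np.random.randint(0, 256, (256, 256)).astype(np.float64)
print(f"EN = {entropy(fused):.3f}, AG = {average_gradient(fused):.3f}")
```

Higher EN indicates that the fused image carries more information, and higher AG indicates sharper texture and edge detail, which is the sense in which the gains of 0.11 and 0.06 reported above should be read.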
Pages: 11