Correlation-and-Correction Fusion Attention Network for Occluded Pedestrian Detection

Cited by: 6
Authors
Zou, Fengmin [1]
Li, Xu [1]
Xu, Qimin [1]
Sun, Zhengliang [2]
Zhu, Jianxiao [1]
Affiliations
[1] Southeast Univ, Sch Instrument Sci & Engn, Nanjing 210096, Peoples R China
[2] Minist Publ Secur, Traff Management Res Inst, Wuxi 214151, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Correlation; Sensors; Proposals; Fuses; Detectors; Head; Attention mechanism; crowded scenes; feature enhancement; fusion; pedestrian detection; DEEP FEATURES;
DOI
10.1109/JSEN.2023.3242082
CLC Number
TM [Electrical Technology]; TN [Electronic Technology & Communication Technology];
Discipline Code
0808; 0809;
Abstract
As a significant task in computer vision, pedestrian detection has made considerable progress with the support of deep learning. However, pedestrian detection in crowded scenes still suffers from feature loss and feature confusion. To address this issue, we propose a pedestrian detection network based on a correlation-and-correction fusion attention mechanism. First, a multimask correction attention module is proposed, which generates visible-part masks of pedestrians, enhancing the features of visible regions and correcting erroneous ones. By generating multiple masks, the module also preserves the features of multiple pedestrian classes. Then, we fuse in a correlation channel attention module to strengthen the correlation among the body-part features of different pedestrians. Next, we study three ways of fusing the correlation and correction attention mechanisms and find that the serial connection "correlation first, correction second" works best. Finally, we extend our method to multiclass pedestrian detection in crowded scenes. Experimental results on the CityPersons, Caltech, and CrowdHuman datasets demonstrate the effectiveness of our method. On the CityPersons dataset, where more than 70% of pedestrians are occluded, our method outperforms the baseline by 1.12% on the heavy-occlusion subset and surpasses many strong methods.
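To make the serial "correlation first, correction second" fusion concrete, below is a minimal PyTorch sketch. Only the serial ordering of the two attention modules comes from the abstract; everything else is assumed for illustration: the SE-style channel reweighting standing in for the correlation channel attention, the sigmoid mask head standing in for the multimask correction attention, and the (1 + mask) boosting of visible regions are hypothetical choices, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CorrelationChannelAttention(nn.Module):
    """SE-style channel attention standing in for the paper's
    correlation channel attention module (an assumption)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Reweight channels by a learned correlation score.
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w

class MultiMaskCorrectionAttention(nn.Module):
    """Predicts per-class visible-part masks and uses them to
    enhance visible-region features (hypothetical head layout)."""
    def __init__(self, channels: int, num_classes: int = 1):
        super().__init__()
        self.mask_head = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, num_classes, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        masks = self.mask_head(x)                 # (B, K, H, W) visible-part masks
        mask = masks.max(dim=1, keepdim=True)[0]  # merge the K class masks
        return x * (1.0 + mask)                   # boost visible regions, keep the rest

class SerialFusionAttention(nn.Module):
    """Serial fusion: correlation first, correction second,
    the ordering the abstract reports as working best."""
    def __init__(self, channels: int, num_classes: int = 1):
        super().__init__()
        self.correlation = CorrelationChannelAttention(channels)
        self.correction = MultiMaskCorrectionAttention(channels, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.correction(self.correlation(x))

# Usage on a dummy backbone feature map.
feat = torch.randn(2, 256, 32, 16)
out = SerialFusionAttention(256, num_classes=2)(feat)
print(out.shape)  # torch.Size([2, 256, 32, 16])
```

The sketch mirrors the abstract's design logic: channel correlation is applied first so that body-part features are mutually reinforced before the mask-based correction step amplifies the visible regions.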
Pages: 6061-6073
Page count: 13