Rethinking attention mechanism for enhanced pedestrian attribute recognition

Cited: 0
Authors
Wu, Junyi [1 ,2 ]
Huang, Yan [3 ]
Gao, Min [4 ,5 ]
Niu, Yuzhen [1 ,2 ]
Chen, Yuzhong [1 ,2 ]
Wu, Qiang
Affiliations
[1] Fuzhou Univ, Fujian Key Lab Network Comp & Intelligent Informat, Coll Comp & Data Sci, Fuzhou 350108, Fujian, Peoples R China
[2] Minist Educ, Engn Res Ctr BigData Intelligence, Fuzhou 350108, Fujian, Peoples R China
[3] Univ Technol Sydney, Australian Artificial Intelligence Inst, Sydney, NSW 2007, Australia
[4] Fuzhou Univ, Coll Phys & Informat Engn, Fujian Key Lab Intelligent Proc & Wireless Transmi, Fuzhou 350108, Fujian, Peoples R China
[5] Univ Technol Sydney, Sch Elect & Data Engn, Sydney, NSW 2007, Australia
Funding
National Natural Science Foundation of China
Keywords
Pedestrian attribute recognition; Attention mechanism; Attention-aware regularization
DOI
10.1016/j.neucom.2025.130236
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Pedestrian Attribute Recognition (PAR) plays a crucial role in various computer vision applications, demanding precise and reliable identification of attributes from pedestrian images. Traditional PAR methods, though effective in leveraging attention mechanisms, often suffer from the lack of direct supervision on attention, leading to potential overfitting and misallocation. This paper introduces a novel and model-agnostic approach, Attention-Aware Regularization (AAR), which rethinks the attention mechanism by integrating causal reasoning to provide direct supervision of attention maps. AAR employs perturbation techniques and a unique optimization objective to assess and refine attention quality, encouraging the model to prioritize attribute-specific regions. Our method demonstrates significant improvement in PAR performance by mitigating the effects of incorrect attention and fostering a more effective attention mechanism. Experiments on standard datasets showcase the superiority of our approach over existing methods, setting a new benchmark for attention-driven PAR models.
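The record gives only the abstract, not the paper's equations, so the following is a minimal, hypothetical sketch of the general idea it describes: perturb (mask) image regions, treat the resulting drop in attribute confidence as a causal-importance signal, and penalize attention maps that ignore causally important regions. All function names (`causal_drop`, `aar_loss`) and the cross-entropy form of the penalty are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def causal_drop(predict, image, region_masks):
    """Hypothetical causal probe: mask each candidate region and record
    how much the attribute confidence drops. A large drop suggests the
    region is causally important for the attribute.

    `predict` maps an image array to a scalar confidence.
    `region_masks` are binary arrays (1 = region to mask out).
    """
    base = predict(image)
    drops = []
    for mask in region_masks:
        perturbed = image * (1.0 - mask)  # zero out the region
        drops.append(max(base - predict(perturbed), 0.0))
    return np.asarray(drops)

def aar_loss(attn, drops, eps=1e-8):
    """Illustrative attention-aware penalty: normalize per-region
    attention mass and causal drops into distributions, then use a
    cross-entropy-style term so that attention concentrated on regions
    with little causal effect incurs a higher loss."""
    attn = attn / (attn.sum() + eps)
    target = drops / (drops.sum() + eps)
    return float(-(target * np.log(attn + eps)).sum())
```

Under this sketch, an attention map aligned with the causal drops (e.g. `attn = [0.7, 0.2, 0.1]` against `drops = [0.6, 0.3, 0.1]`) yields a lower penalty than a misaligned one, which is the direction of supervision the abstract describes.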
Pages: 10