Multi-attribute Learning for Pedestrian Attribute Recognition in Surveillance Scenarios

Cited: 0
Authors
Li, Dangwei [1 ,2 ]
Chen, Xiaotang [1 ,2 ]
Huang, Kaiqi [1 ,2 ,3 ]
Affiliations
[1] CASIA, CRIPAC, Beijing, Peoples R China
[2] CASIA, NLPR, Beijing, Peoples R China
[3] CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing, Peoples R China
Source
PROCEEDINGS 3RD IAPR ASIAN CONFERENCE ON PATTERN RECOGNITION ACPR 2015 | 2015
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In real video surveillance scenarios, visual pedestrian attributes such as gender, backpack, and clothes type are very important for pedestrian retrieval and person re-identification. Existing attribute recognition methods have two drawbacks: (a) handcrafted features (e.g., color histograms, local binary patterns) cannot cope well with the difficulty of real video surveillance scenarios; (b) the relationships among pedestrian attributes are ignored. To address these two drawbacks, we propose two deep learning based models for recognizing pedestrian attributes. On the one hand, each attribute is treated as an independent component, and a deep learning based single-attribute recognition model (DeepSAR) is proposed to recognize each attribute one by one. On the other hand, to exploit the relationships among attributes, a deep learning framework that recognizes multiple attributes jointly (DeepMAR) is proposed. In DeepMAR, one attribute can contribute to the representation of other attributes; for example, the attribute woman can contribute to the representations of long hair and skirt. Experiments on recent popular pedestrian attribute datasets show that the proposed models achieve state-of-the-art results.
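The joint formulation the abstract describes can be illustrated with a minimal sketch: instead of training one classifier per attribute (the DeepSAR view), a single model emits one logit per attribute and is trained with a sigmoid cross-entropy term for each, so all attributes share one representation (the DeepMAR view). This is a simplified illustration only, not the paper's exact loss; DeepMAR additionally weights the per-attribute terms to handle label imbalance, which is omitted here, and the toy logits and attribute names are made up.

```python
import numpy as np

def sigmoid(z):
    # Element-wise logistic function mapping logits to probabilities.
    return 1.0 / (1.0 + np.exp(-z))

def multi_label_bce(logits, labels):
    """Joint multi-attribute loss: one sigmoid cross-entropy term per
    attribute, averaged over samples and attributes, so one network is
    trained for all attributes at once (unweighted sketch)."""
    p = sigmoid(logits)
    eps = 1e-12  # guard against log(0)
    return -np.mean(labels * np.log(p + eps)
                    + (1 - labels) * np.log(1 - p + eps))

# Toy example: 2 pedestrians, 3 binary attributes
# (hypothetically: female, long hair, skirt).
logits = np.array([[ 2.0,  1.5,  0.5],
                   [-1.0, -2.0, -0.5]])
labels = np.array([[1, 1, 1],
                   [0, 0, 0]], dtype=float)
loss = multi_label_bce(logits, labels)
```

Because every attribute's gradient flows into the same shared features, a confident prediction for one attribute (e.g., female) can shape the representation used for correlated ones (e.g., long hair), which is the intuition behind recognizing the attributes jointly.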
Pages: 111-115
Page count: 5