A practical object detection-based multiscale attention strategy for person reidentification

Cited by: 0
Authors
Zhang, Bin [1 ]
Song, Zhenyu [1 ]
Huang, Xingping [1 ]
Qian, Jin [1 ]
Cai, Chengfei [1 ]
Affiliations
[1] Taizhou Univ, Coll Informat Engn, Taizhou 225300, Peoples R China
Source
ELECTRONIC RESEARCH ARCHIVE | 2024, Vol. 32, No. 12
Keywords
person reidentification; object detection; YOLOv7; multiscale attention strategy; NETWORK;
DOI
10.3934/era.2024317
Chinese Library Classification (CLC)
O1 [Mathematics];
Discipline classification code
0701 ; 070101 ;
Abstract
In person reidentification (PReID) tasks, challenges such as occlusion and small object sizes frequently arise. High-precision object detection methods can accurately locate small objects, while attention mechanisms help focus on the strong feature regions of objects; together, these approaches mitigate, to some extent, the mismatches caused by occlusion and small objects. This paper proposes a PReID method based on object detection and attention mechanisms (ODAMs) to improve object matching accuracy. In the proposed ODAM-based PReID system, You Only Look Once version 7 (YOLOv7) is used as the detection algorithm, and a size attention mechanism is integrated into its backbone network to further improve detection accuracy. For feature extraction, ResNet-50 is employed as the base network and augmented with residual attention mechanisms (RAMs) for PReID. This network emphasizes the key local information of the target object, enabling the extraction of more effective features. Extensive experiments demonstrate that the proposed method achieves a mean average precision (mAP) of 90.1% and a Rank-1 accuracy of 97.2% on the Market-1501 dataset, as well as an mAP of 82.3% and a Rank-1 accuracy of 91.4% on the DukeMTMC-reID dataset. The proposed PReID method offers significant practical value for intelligent surveillance systems. By integrating multiscale attention and RAMs, the method improves both object detection accuracy and feature extraction robustness, enabling more efficient individual identification in complex scenes. These improvements are crucial for enhancing the real-time performance and accuracy of video surveillance systems, thus providing effective technical support for intelligent monitoring and security applications.
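To illustrate the pipeline outlined in the abstract (a detector produces person crops, and an attention-augmented ResNet-50 extracts matching features), the sketch below is offered as a minimal, hypothetical example. It is not the authors' released code: it assumes an SE-style channel attention applied as a residual refinement on top of a torchvision ResNet-50, and all module and variable names are placeholders.

import torch
import torch.nn as nn
from torchvision import models


class ResidualChannelAttention(nn.Module):
    """Channel attention applied as a residual refinement of a feature map (assumed RAM form)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global spatial context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x + x * w                             # identity + attention-weighted features


class ReIDBackbone(nn.Module):
    """ResNet-50 trunk, residual attention, and an embedding head (hypothetical layout)."""

    def __init__(self, embed_dim: int = 512):
        super().__init__()
        resnet = models.resnet50(weights=None)       # pretrained weights optional
        self.trunk = nn.Sequential(*list(resnet.children())[:-2])  # keep conv stages only
        self.attn = ResidualChannelAttention(2048)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(2048, embed_dim)
        )

    def forward(self, crops: torch.Tensor) -> torch.Tensor:
        feats = self.attn(self.trunk(crops))         # attention-refined feature map
        return nn.functional.normalize(self.head(feats), dim=1)  # unit-length embeddings


# Usage sketch: embed detected person crops (e.g., boxes from a YOLOv7 detector)
# and match query/gallery identities by cosine similarity of the embeddings.
model = ReIDBackbone().eval()
with torch.no_grad():
    emb = model(torch.randn(4, 3, 256, 128))         # 4 crops at a typical ReID input size
print(emb.shape)                                     # torch.Size([4, 512])

In such a setup, the detector handles localization of small or partially occluded persons, while the attention-refined embedding is what drives the actual reidentification matching.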
Pages: 6772-6791
Number of pages: 20