Multi-Label-Based Similarity Learning for Vehicle Re-Identification

Cited by: 20
Authors
Alfasly, Saghir [1,4]
Hu, Yongjian [1,2]
Li, Haoliang [3]
Liang, Tiancai [4]
Jin, Xiaofeng [4]
Liu, Beibei [1,2]
Zhao, Qingli [4]
Affiliations
[1] South China Univ Technol, Sch Elect & Informat Engn, Guangzhou 510640, Guangdong, Peoples R China
[2] Sino Singapore Int Joint Res Inst, Guangzhou 510700, Guangdong, Peoples R China
[3] Nanyang Technol Univ, Rapid Rich Object Search Lab, Singapore 639798, Singapore
[4] GRG Intelligent Secur Inst, Guangzhou 510006, Guangdong, Peoples R China
Keywords
Deep convolutional neural network; discriminative features; multi-label-based similarity learning; metric learning; vehicle re-identification
DOI
10.1109/ACCESS.2019.2948965
Chinese Library Classification (CLC): TP [Automation Technology, Computer Technology]
Discipline code: 0812
Abstract
The growing attention to surveillance-video analysis has made vehicle re-identification one of the most active areas of study. Extracting discriminative visual representations for vehicle re-identification is challenging because of the low variance among vehicles that share the same model, brand, type, and color. Several methods have recently been proposed for vehicle re-identification, following either a feature-learning or a metric-learning approach; however, an efficient and cost-effective model is still in demand. In this paper, we propose multi-label-based similarity learning (MLSL) for vehicle re-identification, obtaining an efficient deep-learning-based model that derives robust vehicle representations. The model consists of two main parts: a multi-label-based similarity learner that employs a Siamese network over three vehicle attributes (vehicle ID, color, and type), and a regular CNN-based feature learner that learns feature representations from the vehicle ID attribute. Both parts are trained jointly. To validate the effectiveness of the model, extensive experiments were conducted on three of the largest well-known datasets: VeRi-776, VehicleID, and VERI-Wild. Furthermore, the contribution of each part of the proposed model is validated by examining its influence on overall performance. The results demonstrate the superiority of our model over multiple state-of-the-art methods on the three datasets.
Pages: 162605-162616
Page count: 12
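
For readers curious how the two parts described in the abstract might fit together, below is a minimal PyTorch sketch. It is not the authors' code: the ResNet-50 backbone, the 512-dimensional embedding, the contrastive form of the per-attribute similarity terms, and the unit loss weights are all assumptions made for illustration. A shared backbone feeds a Siamese similarity branch over the vehicle ID, color, and type labels and a vehicle-ID classification branch, and the two losses are summed for joint training.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class MLSLSketch(nn.Module):
    """Hypothetical two-branch model: a shared CNN backbone feeding (a) an
    embedding used for multi-label similarity learning and (b) a vehicle-ID
    classifier used for regular feature learning."""

    def __init__(self, num_ids: int, feat_dim: int = 512):
        super().__init__()
        backbone = models.resnet50(weights=None)     # backbone choice is an assumption
        backbone.fc = nn.Identity()                  # expose the pooled 2048-d features
        self.backbone = backbone
        self.embed = nn.Linear(2048, feat_dim)       # shared embedding for similarity learning
        self.id_head = nn.Linear(feat_dim, num_ids)  # feature-learning branch (vehicle ID)

    def forward(self, x):
        feat = self.embed(self.backbone(x))
        return feat, self.id_head(feat)


def joint_loss(feat_a, feat_b, id_logits_a, id_targets_a,
               labels_a, labels_b, margin: float = 1.0):
    """Joint objective (equal weighting is an assumption): one contrastive term
    per attribute (vehicle ID, color, type) on a Siamese pair of embeddings,
    plus cross-entropy on the vehicle-ID classification branch."""
    dist = F.pairwise_distance(feat_a, feat_b)
    sim_loss = feat_a.new_zeros(())
    for key in ("id", "color", "type"):
        same = (labels_a[key] == labels_b[key]).float()
        # pull same-label pairs together, push different-label pairs apart
        sim_loss = sim_loss + (same * dist.pow(2)
                               + (1.0 - same) * F.relu(margin - dist).pow(2)).mean()
    ce_loss = F.cross_entropy(id_logits_a, id_targets_a)
    return sim_loss + ce_loss
```

Under this reading, each training batch would consist of image pairs with their attribute labels; at test time only the embedding would be used, ranking gallery vehicles by embedding distance to the query.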