Egocentric Analysis of Dash-Cam Videos for Vehicle Forensics

Cited by: 5
Authors
Mehrish, Ambuj [1 ]
Singh, Prerna [1 ]
Jain, Puneet [1 ]
Subramanyam, A., V [1 ]
Kankanhalli, Mohan [2 ]
Affiliations
[1] Indraprastha Inst Informat Technol, Dept Elect & Comp Engn, New Delhi 110020, India
[2] Natl Univ Singapore, Sch Comp, Singapore 117417, Singapore
Keywords
Videos; Automobiles; Cameras; Accidents; Forensics; Sensors; Privacy; Vehicle forensics; blind deconvolution; Bregman iteration; random forest; Ada boosting; egocentric video analysis; OBJECT RECOGNITION; PATTERN
DOI
10.1109/TCSVT.2019.2929561
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Video acquisition using dashboard-mounted cameras has recently gained massive popularity around the world. One major consequence of the dash-cam's popularity is that the videos it captures can be used as testimony in scenarios such as traffic violations and accidents. The widespread deployment of dash-cams also brings new problems, ranging from the compromise of privacy when these videos are uploaded to public websites, to the use of videos captured from other cars to make fraudulent claims. There is therefore a compelling need to address the problems associated with the use of dash-cam videos. In this paper, we discuss and highlight the importance of the emerging area of multimedia vehicle forensics. We propose an algorithm for linking a dash-cam video to a specific car. The proposed algorithm is useful in various applications; for example, insurance companies can authenticate the origin of a video before processing a claim, and in the case of an illegitimate video upload on the Web, the video can be traced back to the car from which it originated. To this end, we use motion blur extracted from dash-cam videos to generate a discriminative feature: we observe that the subtle motion pattern of every vehicle can serve as its unique signature. We extract motion blur from dash-cam videos and use a random forest classifier to identify the source vehicle. Experimental results on thousands of frames obtained from dash-cam videos of several cars show the effectiveness of our approach. We further investigate the process of forging a car's signature and propose a counter-forensics method to detect such forgery. We also discuss the application of our technique to other platforms where the camera can be mounted, for example, on the chest of a person. We believe ours is the first work to describe this new area of research.
Pages: 3000-3014
Page count: 15
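The abstract outlines a two-stage pipeline: extract a motion-blur-based feature from each dash-cam frame, then classify the source vehicle with a random forest. The following is a minimal sketch of that idea only; it does not reproduce the paper's blind deconvolution and Bregman iteration steps, and the blur_descriptor feature, the synthetic frames, and all parameters are illustrative placeholders of our own.

```python
# Illustrative sketch only (NOT the authors' implementation): the paper
# estimates a motion-blur kernel via blind deconvolution with Bregman
# iteration; here a simple gradient-orientation histogram stands in for
# that blur feature, followed by random forest classification as in the
# abstract. All data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def blur_descriptor(frame, bins=16):
    """Stand-in per-frame feature: magnitude-weighted histogram of gradient
    orientations, meant to capture the dominant smear direction."""
    gy, gx = np.gradient(frame.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

# Synthetic frames from three hypothetical cars, each given a slightly
# different dominant motion direction as a proxy for its vibration signature.
features, labels = [], []
for car_id, angle in enumerate([0.0, 0.6, 1.2]):
    dy, dx = int(round(4 * np.sin(angle))), int(round(4 * np.cos(angle)))
    for _ in range(200):
        base = rng.random((64, 64))
        smeared = 0.5 * (base + np.roll(base, (dy, dx), axis=(0, 1)))
        features.append(blur_descriptor(smeared))
        labels.append(car_id)

X, y = np.array(features), np.array(labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("frame-level accuracy on held-out frames:", clf.score(X_te, y_te))
```

In the paper itself, the per-frame feature is the motion-blur signature recovered by blind deconvolution rather than this gradient histogram; the random forest classification stage matches what the abstract describes.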