Acoustic Non-Line-of-Sight Vehicle Approaching and Leaving Detection

Cited by: 3
Authors
Hao, Mingyang [1 ]
Ning, Fangli [1 ]
Wang, Ke [1 ]
Duan, Shaodong [1 ]
Wang, Zhongshan [1 ]
Meng, Di [1 ]
Xie, Penghao [1 ]
Affiliations
[1] Northwestern Polytechnical University, School of Mechanical Engineering, Xi'an 710072, People's Republic of China
Funding
National Natural Science Foundation of China;
Keywords
Acoustics; Feature extraction; Vehicle detection; Nonlinear optics; Task analysis; Microphone arrays; Location awareness; Acoustic traffic perception; sound event detection; intelligent vehicle; deep learning; non-line-of-sight;
DOI
10.1109/TITS.2024.3353749
Chinese Library Classification
TU [Building Science];
Discipline Classification Code
0813;
Abstract
Early detection of occluded moving vehicles at intersections can prevent collisions, but most existing sensors detect only objects within the line of sight. Sound propagates around obstacles through reflection and diffraction, enabling passive acoustic sensing of occluded vehicles. We propose a deep-learning-based acoustic non-line-of-sight (NLOS) vehicle detection method. Using the direction-of-arrival feature and the time-frequency feature computed from microphone array data as image-form inputs, we designed and trained a parallel neural network to perceive the direction of an occluded moving vehicle at an intersection. Since intelligent vehicles must react differently to an approaching than to a leaving occluded vehicle, we further distinguish the occluded vehicle's approaching/leaving status. To evaluate the proposed method, we collected data at different locations in an urban environment. The experimental results show that classification accuracy for the 6-class intersection traffic conditions reached 96.71%, and an occluded approaching vehicle was detected 1 second before it entered the line of sight, providing additional reaction time for intelligent vehicles. The direction of the occluded moving vehicle is accurately predicted, and its approaching/leaving status is further inferred, providing detailed traffic information for intelligent vehicles' response decisions. Furthermore, experiments show that our method outperforms the state-of-the-art acoustic NLOS approaching-vehicle detection baseline on real-world traffic datasets. Our code and dataset are available at: https://github.com/RST2detection/Acoustic-Occluded-Vehicle-Detection.
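As a rough illustration of the parallel-network idea described in the abstract, the PyTorch sketch below pairs one CNN branch for the direction-of-arrival feature image with a second branch for the time-frequency feature image, concatenates the two embeddings, and classifies the 6 intersection traffic conditions. All layer sizes and the names Branch and ParallelNet are illustrative assumptions, not the authors' published architecture; the full model is in the repository linked above.

```python
# Minimal sketch of a two-branch ("parallel") network: one encoder per
# feature image, fused for 6-class classification. Hypothetical layer
# sizes; not the architecture from the paper.
import torch
import torch.nn as nn

class Branch(nn.Module):
    """Small CNN encoder for one single-channel feature image (1 x H x W)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch, 32)
        )

    def forward(self, x):
        return self.net(x)

class ParallelNet(nn.Module):
    """Fuses the DOA-feature and time-frequency branches, then classifies."""
    def __init__(self, num_classes=6):
        super().__init__()
        self.doa_branch = Branch()
        self.tf_branch = Branch()
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, doa_img, tf_img):
        fused = torch.cat([self.doa_branch(doa_img), self.tf_branch(tf_img)], dim=1)
        return self.classifier(fused)

# Example: random tensors stand in for the image-form acoustic features.
model = ParallelNet()
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 6])
```

Keeping the two feature streams in separate branches lets each encoder specialize in its own representation (spatial DOA map vs. spectrogram) before late fusion, which is the design rationale the abstract's "parallel neural network" suggests.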
Pages: 9979-9991
Number of pages: 13