A Framework for Object Classification via Camera-Radar Fusion with Automated Labeling

Times Cited: 0
Authors
Samuktha, V. [1]
Abhilash, S. [1]
Kumar, Nitish [2]
Rajalakshmi, P. [2]
Affiliations
[1] IIT Hyderabad, Dept Artificial Intelligence, Hyderabad, India
[2] IIT Hyderabad, Dept Elect Engn, Hyderabad, India
Source
2024 IEEE SENSORS APPLICATIONS SYMPOSIUM, SAS 2024 | 2024
Keywords
Annotation; Radar; camera; fusion; multimodal datasets; deep learning
DOI
10.1109/SAS60918.2024.10636564
CLC Number
TP39 [Computer Applications]
Discipline Classification Codes
081203; 0835
Abstract
This paper introduces a robust deep learning-based framework for annotating Radar and camera datasets, with the goal of enhancing classification and object-detection performance in complex driving environments. Autonomous driving systems frequently struggle to interpret complex scenes, since camera data alone may not capture the full scope of the environment. In many commercial vehicles, Radar sensors are deployed alongside cameras, providing complementary data streams that enrich understanding of the surroundings. The framework presented in this study uses a deep learning approach to annotate Radar data that lacks elevation information, and proposes a new neural architecture for annotating Radar and camera data, thereby enabling precise classification tasks and ultimately improving the performance of autonomous systems. The generated annotation files were further tested with a newly proposed mid-level fusion architecture. Evaluation on a benchmark dataset yields an overall classification accuracy of 99%, underscoring the efficacy of the proposed framework, and testing on a dataset gathered on-site achieves an overall accuracy of 90%, further validating the framework's robustness and real-world applicability. With this framework, researchers can leverage the combined strengths of Radar-camera fusion to strengthen autonomous systems across a wide range of driving scenarios.
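The core automated-labeling idea described in the abstract (transferring camera detections to Radar returns that lack elevation) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the flat-plane projection, the intrinsic matrix `K`, and the function names `radar_to_image` and `label_radar_points` are all assumptions made for the sketch.

```python
import numpy as np

def radar_to_image(points_ra, K, height=0.0):
    """Project Radar detections (range r in m, azimuth a in rad) into the image.

    The Radar provides no elevation, so each point is assumed to lie on a
    fixed plane at `height` metres in the camera frame (a flat-ground
    assumption). K is the 3x3 camera intrinsic matrix; the Radar and
    camera frames are assumed already aligned for simplicity.
    """
    r, a = points_ra[:, 0], points_ra[:, 1]
    # Polar Radar coordinates -> camera frame (x right, y down, z forward).
    x = r * np.sin(a)
    z = r * np.cos(a)
    y = np.full_like(x, height)
    pts = np.stack([x, y, z], axis=1)
    uv = (K @ pts.T).T                # pinhole projection
    return uv[:, :2] / uv[:, 2:3]    # perspective divide -> pixel (u, v)

def label_radar_points(uv, boxes):
    """Give each projected Radar point the class of the first camera
    bounding box (x1, y1, x2, y2, cls) containing it; -1 if none."""
    labels = np.full(len(uv), -1, dtype=int)
    for i, (u, v) in enumerate(uv):
        for x1, y1, x2, y2, cls in boxes:
            if x1 <= u <= x2 and y1 <= v <= y2:
                labels[i] = int(cls)
                break
    return labels

# Toy usage: two Radar returns, one camera detection box of class 2.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
uv = radar_to_image(np.array([[10.0, 0.0], [20.0, 0.3]]), K)
labels = label_radar_points(uv, [(600, 300, 700, 400, 2)])
# The boresight return projects to the image centre and inherits class 2;
# the off-axis return falls outside the box and stays unlabeled (-1).
```

Labels produced this way can then supervise a Radar-only classifier, which is the general pattern the paper's pipeline follows; the exact projection and association rules in the paper may differ.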
Pages: 6
References
22 in total
[1]   Pipeline for Automation of LiDAR Data Annotation [J].
Anand, Bhaskar ;
Rajalakshmi, P. .
2023 IEEE SENSORS APPLICATIONS SYMPOSIUM, SAS, 2023,
[2]   Robust Detection and Tracking Method for Moving Object Based on Radar and Camera Data Fusion [J].
Bai, Jie ;
Li, Sen ;
Huang, Libo ;
Chen, Huanlei .
IEEE SENSORS JOURNAL, 2021, 21 (09) :10761-10774
[3]   Belfiore F, 2017, EUROP RADAR CONF, P143, DOI 10.23919/EURAD.2017.8249167
[4]   nuScenes: A multimodal dataset for autonomous driving [J].
Caesar, Holger ;
Bankiti, Varun ;
Lang, Alex H. ;
Vora, Sourabh ;
Liong, Venice Erin ;
Xu, Qiang ;
Krishnan, Anush ;
Pan, Yu ;
Baldan, Giancarlo ;
Beijbom, Oscar .
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2020), 2020, :11618-11628
[5]   ERASE-Net: Efficient Segmentation Networks for Automotive Radar Signals [J].
Fang, Shihong ;
Zhu, Haoran ;
Bisla, Devansh ;
Choromanska, Anna ;
Ravindran, Satish ;
Ren, Dongyin ;
Wu, Ryan .
2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023), 2023, :9331-9337
[6]   Isele S. T., 2020, C NEUR INF PROC SYST, V47
[7]   3D Multi-Object Tracking Based on Radar-Camera Fusion [J].
Lin, Zihao ;
Hu, Jianming .
2022 IEEE 25TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2022, :2502-2507
[8]   Automatic Label Creation Framework for FMCW Radar Images Using Camera Data [J].
Mendez, Javier ;
Schoenfeldt, Stephan ;
Tang, Xinyi ;
Valtl, Jakob ;
Cuellar, M. P. ;
Morales, Diego P. .
IEEE ACCESS, 2021, 9 :83329-83339
[9]   A Deep Learning-based Radar and Camera Sensor Fusion Architecture for Object Detection [J].
Nobis, Felix ;
Geisslinger, Maximilian ;
Weber, Markus ;
Betz, Johannes ;
Lienkamp, Markus .
2019 SYMPOSIUM ON SENSOR DATA FUSION: TRENDS, SOLUTIONS, APPLICATIONS (SDF 2019), 2019,
[10]   CARRADA Dataset: Camera and Automotive Radar with Range-Angle-Doppler Annotations [J].
Ouaknine, Arthur ;
Newson, Alasdair ;
Rebut, Julien ;
Tupin, Florence ;
Perez, Patrick .
2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, :5068-5075