BoostedDim attention: A novel data-driven approach to improving LiDAR-based lane detection

Cited by: 1
Authors
Patil, Omkar [1 ]
Nair, Binoy B. [1 ]
Soni, Rajat [2 ]
Thayyilravi, Arunkrishna [2 ]
Manoj, C. R. [2 ]
Affiliations
[1] Amrita Vishwa Vidyapeetham, Amrita Sch Engn, Dept Elect & Commun Engn, Coimbatore, India
[2] Tata Consultancy Serv, Bangalore, India
Keywords
ADAS; Computer Vision; K-Lane; Lane Detection; LiDAR; Multi-Head Attention; Self-Attention; Vision Transformer
DOI
10.1016/j.asej.2024.102887
Chinese Library Classification (CLC)
T [Industrial Technology]
Discipline Classification Code
08
Abstract
Lane detection is a fundamental component of advanced driver assistance systems, facilitating critical functionalities such as Lane Keep/Change Assistance, Lane Departure Warning, Adaptive Cruise Control, and Vehicle Localization. Despite significant advancements in camera-based lane detection, it continues to face challenges that can be effectively addressed with LiDAR technology. This research contributes to the domain of LiDAR-based lane detection in three pivotal areas. First, we introduce the BoostedDim Attention method, which enhances the traditional Multi-Head Self-Attention (MHA) calculation within the shallow Vision Transformer-based K-Lane baseline model. The method is particularly effective in demanding scenarios, including unknown and nighttime conditions at short range (0-30 m) and daytime conditions at long range (30-50 m). Second, we devise distance-based True Positive Rate (TPR) and Lateral Error evaluation metrics, providing a more precise and tailored assessment of model performance than conventional metrics. These metrics account for sensor-specific and task-specific factors, offering a comprehensive evaluation of LiDAR-based lane detection capabilities. Finally, our investigation sheds light on the significance of calibrated reflectivity and intensity data, revealing their impact on lane detection under various lighting conditions. Notably, intensity data has a positive influence in low-light conditions at short range and an adverse effect during daytime at long range. These findings have significant implications for enhancing autonomous driving applications and other computer vision tasks.
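The abstract describes distance-based TPR and Lateral Error metrics evaluated over short (0-30 m) and long (30-50 m) ranges, but does not give their exact definitions. The following is a minimal, hypothetical Python sketch of how such distance-binned metrics could be computed; the function name, the 0.5 m longitudinal slice, and the 0.3 m lateral matching tolerance are illustrative assumptions, not values from the paper.

import numpy as np

# Hypothetical sketch (not the paper's released code): distance-binned
# True Positive Rate (TPR) and mean lateral error for lane points.
def distance_binned_metrics(pred_pts, gt_pts,
                            bins=((0.0, 30.0), (30.0, 50.0)),
                            long_slice=0.5, lateral_tol=0.3):
    """pred_pts, gt_pts: (N, 2) arrays of (longitudinal x, lateral y) lane
    points in metres, in the ego frame.  long_slice and lateral_tol are
    assumed matching thresholds, not the paper's actual values."""
    results = {}
    for lo, hi in bins:
        # ground-truth points falling inside this distance bin
        gt_bin = gt_pts[(gt_pts[:, 0] >= lo) & (gt_pts[:, 0] < hi)]
        if len(gt_bin) == 0:
            continue
        tp, lat_errs = 0, []
        for gx, gy in gt_bin:
            # predictions in the same longitudinal slice as this GT point
            cand = pred_pts[np.abs(pred_pts[:, 0] - gx) < long_slice]
            if len(cand) == 0:
                continue
            err = np.min(np.abs(cand[:, 1] - gy))
            if err <= lateral_tol:  # GT point counted as detected
                tp += 1
                lat_errs.append(err)
        results[(lo, hi)] = {
            "TPR": tp / len(gt_bin),
            "mean_lateral_error_m": float(np.mean(lat_errs)) if lat_errs else None,
        }
    return results

For example, distance_binned_metrics(pred, gt)[(0.0, 30.0)]["TPR"] would give the short-range detection rate under these assumed thresholds.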
Pages: 21