A robust and real-time lane detection method in low-light scenarios to advanced driver assistance systems

Cited by: 2
Authors
Zhang, Ronghui [1 ]
Peng, Jingtao [1 ]
Gou, Wanting [1 ]
Ma, Yuhang [1 ]
Chen, Junzhou [1 ,3 ]
Hu, Hongyu [2 ]
Li, Weihua [4 ]
Yin, Guodong [5 ]
Li, Zhiwu [6 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Intelligent Syst Engn, Guangdong Key Lab Intelligent Transportat Syst, Guangzhou 510275, Peoples R China
[2] Jilin Univ, State Key Lab Automot Simulat & Control, Changchun 130022, Peoples R China
[3] Univ Durham, Dept Engn, Durham DH1 3LE, England
[4] South China Univ Technol, Sch Mech & Automot Engn, Guangzhou 510640, Peoples R China
[5] Southeast Univ, Sch Mech Engn, Nanjing 211189, Peoples R China
[6] Macau Univ Sci & Technol, Inst Syst Engn, Taipa 999078, Macao, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
ADAS; Lane detection; Real-time; Low-light scenarios; Low-light lane detection datasets; Embedded instrumentation system; HISTOGRAM EQUALIZATION; ENHANCEMENT; RETINEX;
DOI
10.1016/j.eswa.2024.124923
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Lane detection, which relies on front-view RGB cameras, is a crucial aspect of Advanced Driver Assistance Systems (ADAS), but its effectiveness is notably reduced in low-light conditions. This issue is exacerbated by the lack of specialized datasets and generalizable methods for such scenarios. To address this gap, we introduce NightLane, a comprehensive dataset tailored for low-light, multi-traffic lane detection. We adhere to stringent data annotation guidelines, ensuring reliable detection accuracy. Additionally, we propose the Fused Low-Light Enhancement Framework (FLLENet), which leverages modern detection networks and incorporates a low-light enhancement module and attention mechanisms. The enhancement module, based on zero-reference learning, improves image quality and channel richness, while the attention mechanisms effectively extract and utilize these features. Our extensive testing on the NightLane and CULane datasets demonstrates superior performance in low-light lane detection, showcasing FLLENet's robust generalizability and efficacy. Specifically, our approach achieves an F1 measure of 76.90 on CULane and 78.91 on NightLane, demonstrating its effectiveness against state-of-the-art methods. We also evaluate the real-time applicability of our framework on a low-power embedded lane detection system using NVIDIA Jetson AGX/Orin, achieving high accuracy and real-time performance. Our work offers a new approach and reference in the field of low-light lane detection, potentially aiding in the ongoing enhancement of ADAS. The dataset is available at https://github.com/pengjingt/FLLENet.
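As a rough illustration of the zero-reference learning idea behind the enhancement module described above, the PyTorch sketch below applies a Zero-DCE-style iterative curve adjustment to a low-light image. The class name CurveEnhancer, the network width, the iteration count, and the input resolution are illustrative assumptions only and do not reproduce FLLENet's actual module.

    # Hypothetical sketch of zero-reference, curve-based low-light enhancement
    # (Zero-DCE style); architecture details are assumptions, not FLLENet's design.
    import torch
    import torch.nn as nn

    class CurveEnhancer(nn.Module):
        """Predicts per-pixel curve parameters and applies them iteratively."""
        def __init__(self, iterations: int = 8, channels: int = 32):
            super().__init__()
            self.iterations = iterations
            # Small CNN that predicts one 3-channel curve map per iteration.
            self.net = nn.Sequential(
                nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, 3 * iterations, 3, padding=1), nn.Tanh(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: low-light RGB image in [0, 1], shape (B, 3, H, W)
            curves = torch.chunk(self.net(x), self.iterations, dim=1)
            for a in curves:
                # Quadratic curve adjustment: brightens dark pixels while
                # keeping values in [0, 1], since a is in [-1, 1].
                x = x + a * x * (1.0 - x)
            return x

    if __name__ == "__main__":
        enhancer = CurveEnhancer()
        dummy = torch.rand(1, 3, 288, 800)  # CULane-like input resolution (assumed)
        print(enhancer(dummy).shape)        # torch.Size([1, 3, 288, 800])

Because the curve maps are predicted per pixel and per iteration, this family of methods can brighten dark regions without clipping already-bright areas, which is consistent with the abstract's description of enhancing image quality before lane detection.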
Pages: 21