ME-Seg&DLS-Net: A Dataset and a Network for Autonomous Driving Based on Multi-Element Semantic Segmentation of Pavement

Cited by: 0
Authors
Wang, Hai [1 ]
Zhang, Guirong [1 ]
Chen, Long [2 ]
Li, Yicheng [2 ]
Luo, Tong [2 ,3 ]
Cai, Yingfeng [2 ]
Affiliations
[1] Jiangsu Univ, Sch Automot & Traff Engn, Zhenjiang 212013, Peoples R China
[2] Jiangsu Univ, Automot Engn Res Inst, Zhenjiang 212013, Peoples R China
[3] Jiangsu Univ Technol, Sch Automobile & Traff Engn, Changzhou 213001, Peoples R China
Source
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE | 2024
Funding
National Natural Science Foundation of China;
Keywords
Roads; Semantic segmentation; Real-time systems; Feature extraction; Semantics; Task analysis; Accuracy; Autonomous driving; deep learning; semantic segmentation; multi-element segmentation;
DOI
10.1109/TETCI.2024.3423437
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
The segmentation of multiple pavement elements is an essential task of the autonomous driving perception system, particularly for the accurate recognition of drivable areas, lane lines, and road traffic signs. Publicly available datasets for research in this field are scarce, and state-of-the-art models can typically identify only a single type of pavement element. To address this, we construct the ME-Seg dataset for pavement multi-element semantic segmentation, which provides three training labels (drivable areas, lane lines, and road traffic signs) and covers a variety of traffic, lighting, and weather conditions. We then propose DLS-Net, a vision-based real-time semantic segmentation network for autonomous driving on road surfaces. The network adopts a dual-branch structure and can accurately identify drivable areas, lane lines, and road traffic signs simultaneously. Within the network, we design a Multi-Scale Fusion Module (MSFM) to expand the receptive field so that large objects in the field of view are fully covered, and a Cross-Guided Aggregation Module (CGAM) to provide mutual guidance between deep and shallow feature maps while compensating for the information loss of small objects. Experimental results on ME-Seg show that the proposed network reaches an mIoU of 75.61% at an inference speed of 49.5 fps. Real-vehicle experiments further demonstrate that the model performs well and robustly in actual autonomous driving scenarios.
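The abstract describes DLS-Net only at the block level: a dual-branch backbone, an MSFM that enlarges the receptive field, and a CGAM that cross-guides deep and shallow feature maps. The PyTorch sketch below illustrates that overall structure under stated assumptions; the module internals (channel widths, dilation rates, gating arithmetic) and the class count are illustrative guesses, not the authors' implementation.

# Minimal PyTorch sketch of a dual-branch pavement-segmentation network in the
# spirit of DLS-Net as summarized in the abstract. All module internals are
# assumptions for illustration, not the published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_bn_relu(in_ch, out_ch, k=3, s=1, d=1):
    """Conv -> BN -> ReLU helper (padding chosen to preserve spatial size at stride 1)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, k, s, padding=d * (k // 2), dilation=d, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class MSFM(nn.Module):
    """Multi-Scale Fusion Module (assumed form): parallel dilated branches
    enlarge the receptive field, then a 1x1 conv fuses them."""
    def __init__(self, ch):
        super().__init__()
        self.branches = nn.ModuleList([conv_bn_relu(ch, ch, d=d) for d in (1, 2, 4)])
        self.fuse = conv_bn_relu(3 * ch, ch, k=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class CGAM(nn.Module):
    """Cross-Guided Aggregation Module (assumed form): deep semantics gate the
    shallow detail map and vice versa before the two are summed."""
    def __init__(self, ch):
        super().__init__()
        self.gate_deep = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.gate_shallow = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, shallow, deep):
        deep_up = F.interpolate(deep, size=shallow.shape[2:], mode="bilinear", align_corners=False)
        return shallow * self.gate_deep(deep_up) + deep_up * self.gate_shallow(shallow)


class DualBranchSegNet(nn.Module):
    """Dual-branch layout: a high-resolution detail branch and a downsampled
    semantic branch with MSFM; CGAM aggregates them before the segmentation head."""
    def __init__(self, num_classes=4, ch=64):  # 3 pavement-element classes + background (assumed)
        super().__init__()
        self.stem = conv_bn_relu(3, ch, s=2)
        self.detail = nn.Sequential(conv_bn_relu(ch, ch), conv_bn_relu(ch, ch))
        self.semantic = nn.Sequential(conv_bn_relu(ch, ch, s=2), conv_bn_relu(ch, ch, s=2), MSFM(ch))
        self.cgam = CGAM(ch)
        self.head = nn.Conv2d(ch, num_classes, 1)

    def forward(self, x):
        f = self.stem(x)
        fused = self.cgam(self.detail(f), self.semantic(f))
        logits = self.head(fused)
        return F.interpolate(logits, size=x.shape[2:], mode="bilinear", align_corners=False)


if __name__ == "__main__":
    net = DualBranchSegNet()
    out = net(torch.randn(1, 3, 384, 640))
    print(out.shape)  # torch.Size([1, 4, 384, 640])

The sketch keeps the detail branch at 1/2 resolution and the semantic branch at 1/8 resolution so that the cross-guided fusion has both fine boundaries (lane lines, small signs) and wide context (drivable areas) to draw on; the real DLS-Net may differ in depth, widths, and fusion details.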
Pages: 1539 - 1555
Number of pages: 17