Efficient Dual-Stream Fusion Network for Real-Time Railway Scene Understanding

Cited by: 1
Authors
Cao, Zhiwei [1 ,2 ,3 ]
Gao, Yang [1 ,2 ,3 ]
Bai, Jie [1 ,2 ,3 ]
Qin, Yong [1 ,2 ,3 ]
Zheng, Yuanjin [4 ]
Jia, Limin [1 ,2 ,3 ]
Affiliations
[1] Beijing Jiaotong Univ, State Key Lab Adv Rail Autonomous Operat, Beijing 100044, Peoples R China
[2] Beijing Jiaotong Univ, Key Lab Railway Ind Proact Safety & Risk Control, Beijing 100044, Peoples R China
[3] Beijing Jiaotong Univ, Sch Traff & Transportat, Beijing 100044, Peoples R China
[4] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
Keywords
Rail transportation; Feature extraction; Semantic segmentation; Semantics; Safety; Rails; Real-time systems; Railway scene understanding; semantic segmentation; dual-stream fusion network; intelligent railway; railway safety; autonomous driving
DOI
10.1109/TITS.2024.3377187
Chinese Library Classification (CLC)
TU [Building Science]
Subject Classification Code
0813
Abstract
Railway scene understanding is key to autonomous train operation and plays an important role in active train perception. However, most railway scene understanding methods focus on track extraction and ignore other components of railway scenes. Although several semantic segmentation algorithms have been applied to railway scenes, they are computationally expensive and slow, which limits their application in railways. To solve these problems, we propose the efficient dual-stream fusion network (EDFNet), a lightweight semantic segmentation algorithm for understanding railway scenes. First, a dual-stream backbone network based on mobile inverted residual blocks is proposed to extract and fuse detailed features and semantic features. Next, a bi-directional feature pyramid pooling module is proposed to obtain multi-scale features and deep semantic features. Finally, a multi-task aggregate loss is designed to learn semantic and boundary information, improving accuracy without increasing computational complexity. Extensive experimental results demonstrate that EDFNet outperforms state-of-the-art lightweight algorithms in both accuracy and speed on two railway datasets.
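The abstract names mobile inverted residual blocks as the building block of the dual-stream backbone. For reference, the sketch below shows a standard MobileNetV2-style inverted residual block in PyTorch, followed by a toy two-stream fusion via upsampling and addition. The channel widths, expansion ratio, and fusion scheme here are illustrative assumptions, not the published EDFNet configuration.

```python
# Illustrative sketch only: a standard mobile inverted residual block
# (MobileNetV2-style), the primitive the abstract names as the building
# block of EDFNet's dual-stream backbone. Layer widths, expansion ratio,
# and the fusion step are assumptions, not the authors' configuration.
import torch
import torch.nn as nn


class InvertedResidual(nn.Module):
    """Expand -> depthwise conv -> linear project, with a residual skip
    when stride == 1 and the channel counts match."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1, expand: int = 6):
        super().__init__()
        hidden = in_ch * expand
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            # 1x1 pointwise expansion
            nn.Conv2d(in_ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 3x3 depthwise convolution (one filter per channel)
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 1x1 linear projection (no activation)
            nn.Conv2d(hidden, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.block(x)
        return x + out if self.use_skip else out


if __name__ == "__main__":
    # Toy dual-stream usage: a full-resolution "detail" stream and a
    # downsampled "semantic" stream, fused by upsampling and addition.
    x = torch.randn(1, 32, 128, 256)
    detail = InvertedResidual(32, 32, stride=1)(x)    # keeps resolution
    semantic = InvertedResidual(32, 32, stride=2)(x)  # halves resolution
    semantic_up = nn.functional.interpolate(
        semantic, size=detail.shape[2:], mode="bilinear", align_corners=False
    )
    fused = detail + semantic_up
    print(fused.shape)  # torch.Size([1, 32, 128, 256])
```

Addition is only one plausible fusion choice; the paper's bi-directional feature pyramid pooling module is more elaborate, and this toy merge is meant solely to make the two-stream idea concrete.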
Pages: 9442-9452
Number of pages: 11