A Deep Cross-Modal Fusion Network for Road Extraction With High-Resolution Imagery and LiDAR Data

Cited by: 6
Authors
Luo, Hui [1 ]
Wang, Zijing [1 ]
Du, Bo [2 ,3 ]
Dong, Yanni [4 ]
Affiliations
[1] China Univ Geosci, Sch Comp Sci, Wuhan 430079, Peoples R China
[2] Wuhan Univ, Inst Artificial Intelligence, Natl Engn Res Ctr Multimedia Software, Sch Comp Sci, Wuhan 430072, Peoples R China
[3] Wuhan Univ, Hubei Key Lab Multimedia & Network Commun Engn, Wuhan 430072, Peoples R China
[4] Wuhan Univ, Sch Resource & Environm Sci, Wuhan 430079, Peoples R China
Source
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING | 2024, Vol. 62
Keywords
Convolutional neural network; cross-modal feature fusion (CMFF); high-resolution remote sensing image; LiDAR data; road extraction; SEMANTIC SEGMENTATION; INFORMATION; MULTISCALE;
DOI
10.1109/TGRS.2024.3360963
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry];
Discipline Classification Codes
0708; 070902;
Abstract
Urban road extraction is important for applications such as urban planning and transportation. High-resolution imagery (HRI) has become one of the most popular data sources for extracting roads with high efficiency and low cost. However, roads in HRI are easily obscured by buildings, trees, and other land cover, resulting in discontinuities in the extracted roads. Although road extraction techniques based on multimodal data fusion have shown improved results over single-modal methods by incorporating additional information, most existing fusion methods neither fully exploit the features of the different modalities nor consider prior knowledge about roads. To address these problems, a dual encoder-based cross-modal complementary fusion network (DECCFNet) is proposed in this article. The network takes full advantage of the rich feature information contained in HRI and the immunity of LiDAR data to shadows. By effectively fusing the complementary information of HRI and LiDAR data, DECCFNet improves IoU by at least 2.94% and 2.8% on the two datasets, respectively, compared with networks using only a single data modality. DECCFNet mainly contains two modules: 1) a cross-modal feature fusion (CMFF) module: in the dual-encoder part, CMFF fuses the deep features of the different modalities along the channel and spatial dimensions, while a multiscale fusion strategy extracts contextual information; and 2) a multi-direction strip convolution (MDSC) module: because roads are narrow and continuous, applying classical square convolution kernels directly to road features may introduce irrelevant pixels into the computation and blur the extraction results; to mitigate this, MDSC applies strip convolutions to the road features from multiple directions on top of square convolution, making the network focus more on road-specific features. In comparisons with several deep-learning multimodal fusion networks on the two road datasets, the proposed network achieves the best road extraction results.
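To make the two modules concrete, below is a minimal PyTorch sketch of the ideas the abstract describes: attention-based channel/spatial fusion of HRI and LiDAR features (a generic stand-in for CMFF) and axis-aligned strip convolutions combined with a square convolution (a simplified MDSC; the paper's "multiple directions" presumably also include diagonals). All class names, kernel sizes, and hyperparameters here are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only: CMFFBlock/MDSCBlock, kernel sizes, and the
# reduction/strip_len hyperparameters are assumptions, not the paper's code.
import torch
import torch.nn as nn

class CMFFBlock(nn.Module):
    """Fuses HRI and LiDAR features along the channel and spatial dimensions."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: global pooling -> bottleneck MLP -> sigmoid gate.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, (2 * channels) // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d((2 * channels) // reduction, 2 * channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: mean/max maps over channels -> 7x7 conv -> gate.
        self.spatial_gate = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())
        self.project = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, hri: torch.Tensor, lidar: torch.Tensor) -> torch.Tensor:
        x = torch.cat([hri, lidar], dim=1)
        x = x * self.channel_gate(x)                      # channel dimension
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        x = x * self.spatial_gate(pooled)                 # spatial dimension
        return self.project(x)

class MDSCBlock(nn.Module):
    """Strip convolutions in several directions on top of a square convolution,
    so that narrow, continuous road structures dominate the response."""
    def __init__(self, channels: int, strip_len: int = 9):
        super().__init__()
        pad = strip_len // 2
        self.square = nn.Conv2d(channels, channels, 3, padding=1)
        # 1 x k and k x 1 strips follow roads running along the image axes.
        self.horizontal = nn.Conv2d(channels, channels, (1, strip_len), padding=(0, pad))
        self.vertical = nn.Conv2d(channels, channels, (strip_len, 1), padding=(pad, 0))
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * channels, channels, 1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.square(x), self.horizontal(x), self.vertical(x)], dim=1)
        return self.fuse(feats) + x  # residual keeps the original features

if __name__ == "__main__":
    # Shape check on dummy single-scale feature maps (B, C, H, W).
    hri, lidar = torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128)
    fused = CMFFBlock(64)(hri, lidar)
    print(MDSCBlock(64)(fused).shape)  # torch.Size([1, 64, 128, 128])

Per the abstract, CMFF is applied with a multiscale fusion strategy inside the dual encoder; a faithful reproduction would therefore stack such blocks across several encoder scales rather than fuse at a single scale as above.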
Pages: 1-15
Number of pages: 15