Robust pavement crack segmentation network based on transformer and dual-branch decoder

Cited by: 0
Authors
Yu, Zhenwei [1 ,2 ]
Chen, Qinyu [3 ]
Shen, Yonggang [1 ,4 ]
Zhang, Yiping [1 ,4 ]
Affiliations
[1] Zhejiang Univ, Coll Civil Engn & Architecture, Hangzhou, Peoples R China
[2] Zhejiang Univ, Balance Architecture, Hangzhou, Peoples R China
[3] Zhejiang Inst Commun Co Ltd, Hangzhou, Peoples R China
[4] Zhejiang Univ, Innovat Ctr Yangtze River Delta, Hangzhou, Peoples R China
Keywords
Pavement crack; Transformer block; Crack segmentation; Computer vision; Feature extraction;
DOI
10.1016/j.conbuildmat.2024.139026
Chinese Library Classification
TU [Building Science];
Subject Classification Code
0813 ;
Abstract
The application of deep learning techniques for semantic segmentation of crack images has become a significant research direction in road maintenance and safety. Despite extensive research in recent years on semantic segmentation algorithms based on convolutional neural networks, their relatively small effective receptive fields cannot adequately handle long, fine pavement cracks. In contrast, transformer-based models can effectively exploit contextual semantic information. Therefore, a robust pavement crack segmentation network, CSTF, is proposed based on the Swin Transformer encoder. Within CSTF, a feature pyramid pooling module is introduced to provide global priors, and a dual-branch decoder is designed to preserve and learn semantic information, enabling CSTF to handle large-scale images and wide-spanning cracks. The results demonstrate that CSTF achieved an mIoU of 0.813 at 22.97 FPS on the large-scale dataset constructed in this study, enabling high-precision, real-time detection. Moreover, it exhibits robustness against common interfering patterns, such as striped patches and other disturbances found in pavement crack images.
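The paper's implementation is not reproduced in this record. As a purely illustrative sketch of the "global prior" idea the abstract attributes to the feature pyramid pooling module, the snippet below pools a feature map at several bin sizes and concatenates the upsampled results back onto the input, in the style of PSPNet-like designs. All function names, the NumPy backend, and the bin sizes `(1, 2, 3, 6)` are assumptions, not CSTF's actual code.

```python
import numpy as np

def adaptive_avg_pool(feat, out_size):
    """Average-pool a (C, H, W) feature map down to (C, out_size, out_size)."""
    c, h, w = feat.shape
    pooled = np.zeros((c, out_size, out_size), dtype=feat.dtype)
    for i in range(out_size):
        for j in range(out_size):
            h0, h1 = (i * h) // out_size, -(-(i + 1) * h // out_size)
            w0, w1 = (j * w) // out_size, -(-(j + 1) * w // out_size)
            pooled[:, i, j] = feat[:, h0:h1, w0:w1].mean(axis=(1, 2))
    return pooled

def upsample_nearest(feat, h, w):
    """Nearest-neighbour upsampling of a (C, ph, pw) map to (C, h, w)."""
    _, ph, pw = feat.shape
    rows = (np.arange(h) * ph) // h
    cols = (np.arange(w) * pw) // w
    return feat[:, rows][:, :, cols]

def pyramid_pool(feat, bin_sizes=(1, 2, 3, 6)):
    """Concatenate the input with multi-scale pooled context along channels,
    giving each spatial location a coarse global prior."""
    _, h, w = feat.shape
    branches = [feat]
    for s in bin_sizes:
        branches.append(upsample_nearest(adaptive_avg_pool(feat, s), h, w))
    return np.concatenate(branches, axis=0)
```

With four bins, a (C, H, W) input becomes a (5C, H, W) output; in a real network a 1x1 convolution would typically follow to reduce the channel count before decoding.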
Pages: 13
Related Papers (50 records)
  • [1] Dual-branch feature extraction network combined with Transformer and CNN for polyp segmentation
    Liu, Qiaohong
    Lin, Yuanjie
    Han, Xiaoxiang
    Chen, Keyan
    Zhang, Weikun
    Yang, Hui
    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, 2024, 34 (01)
  • [2] TrUNet: Dual-Branch Network by Fusing CNN and Transformer for Skin Lesion Segmentation
    Chen, Wei
    Mu, Qian
    Qi, Jie
    IEEE ACCESS, 2024, 12 : 144174 - 144185
  • [3] Parallel Dual-Branch Polyp Segmentation Network
    Sun, Kunjie
    Cheng, Li
    Yuan, Haiwen
    Li, Xuan
    IEEE ACCESS, 2024, 12 : 192051 - 192061
  • [4] Dual-Branch Network for Cloud and Cloud Shadow Segmentation
    Lu, Chen
    Xia, Min
    Qian, Ming
    Chen, Binyu
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [5] A Dual-Branch Fusion Network for Surgical Instrument Segmentation
    Yang, Lei
    Zhai, Chenxu
    Wang, Hongyong
    Liu, Yanhong
    Bian, Guibin
    IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS, 2024, 6 (04): : 1542 - 1554
  • [6] A Dual-Branch Multiscale Transformer Network for Hyperspectral Image Classification
    Shi, Cuiping
    Yue, Shuheng
    Wang, Liguo
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62 : 1 - 20
  • [7] DFTI: Dual-Branch Fusion Network Based on Transformer and Inception for Space Noncooperative Objects
    Zhang, Zhao
    Zhou, Dong
    Sun, Guanghui
    Hu, YuHui
    Deng, Runran
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73
  • [8] DBMA-Net: A Dual-Branch Multiattention Network for Polyp Segmentation
    Zhai, Chenxu
    Yang, Lei
    Liu, Yanhong
    Yu, Hongnian
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73 : 1 - 16
  • [9] Dual-Branch Transformer Network for Enhancing LiDAR-Based Traversability Analysis in Autonomous Vehicles
    Shao, Shiliang
    Shi, Xianyu
    Han, Guangjie
    Wang, Ting
    Song, Chunhe
    Zhang, Qi
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2025, 26 (02) : 2582 - 2595
  • [10] Ship Recognition for Complex SAR Images via Dual-Branch Transformer Fusion Network
    Sun, Zhongzhen
    Leng, Xiangguang
    Zhang, Xianghui
    Xiong, Boli
    Ji, Kefeng
    Kuang, Gangyao
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2024, 21