A serial semantic segmentation model based on encoder-decoder architecture

Times Cited: 2
Authors
Zhou, Yan [1,2]
Affiliations
[1] Zhejiang Univ, Ocean Coll, Zhoushan 316021, Peoples R China
[2] Minist Nat Resources, Inst Oceanog 2, State Key Lab Satellite Ocean Environm Dynam, Hangzhou 310012, Peoples R China
Keywords
Semantic segmentation; Lawin transformer; CNN; Encoder-decoder; Attention mechanism; Deep convolutional networks; Fusion network
DOI
10.1016/j.knosys.2024.111819
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The thriving progress of Convolutional Neural Networks (CNNs) and the outstanding efficacy of Vision Transformers (ViTs) have delivered impressive results in semantic segmentation. However, each model in isolation entails a trade-off between high computational complexity and reduced computational efficiency. To address this challenge, we combine CNN and encoder-decoder structures in a Transformer-inspired fashion, presenting the Serial Semantic Segmentation Former (SSS-Former) model. To strengthen feature extraction, we build the holistic model on the carefully designed SSS-CSPNet backbone. We propose a novel SSS-PN attention network that enhances the spatial topological connections of features, improving overall performance. Additionally, the integrated SASPP module bridges the semantic gap between multi-scale features and improves the segmentation of overlapping objects. To meet real-time requirements, we apply a novel restructuring technique to derive a lighter and faster ResSSS-Former model. Extensive experiments demonstrate that both SSS-Former and ResSSS-Former outperform existing state-of-the-art methods in computational efficiency, accuracy, and speed. Notably, SSS-Former achieves 58.63% mIoU at 89.1 FPS on the ADE20K dataset. On the Cityscapes validation and test sets, it obtains mIoU scores of 85.1% and 85.2%, respectively, at 94.1 FPS. The optimized ResSSS-Former delivers real-time segmentation at over 100 FPS while maintaining high accuracy. Results on the ISPRS datasets further validate the effectiveness of the proposed models in segmenting multi-scale and overlapping objects.
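Note: the record above gives no implementation details for SASPP. As a purely illustrative aid, the following PyTorch sketch shows a generic ASPP-style multi-scale fusion block of the kind SASPP appears to build on; the class name ASPPSketch, the dilation rates, and the channel widths are assumptions made for this sketch, not the authors' actual SASPP design.

# Hypothetical sketch of an ASPP-style multi-scale fusion block.
# The paper's SASPP details are not given in this record; the dilation
# rates and channel widths below are illustrative assumptions.
import torch
import torch.nn as nn

class ASPPSketch(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        # One 3x3 atrous conv per dilation rate captures a different
        # receptive field over the same feature map.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        # A 1x1 projection fuses the concatenated multi-scale features,
        # narrowing the semantic gap between scales before decoding.
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * len(rates), out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [branch(x) for branch in self.branches]
        return self.project(torch.cat(feats, dim=1))

# Usage: fuse a 512-channel encoder map into 256 channels.
x = torch.randn(1, 512, 32, 32)
print(ASPPSketch(512, 256)(x).shape)  # torch.Size([1, 256, 32, 32])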
Pages: 18