A serial semantic segmentation model based on encoder-decoder architecture

Times Cited: 2
Authors
Zhou, Yan [1,2]
Affiliations
[1] Zhejiang Univ, Ocean Coll, Zhoushan 316021, Peoples R China
[2] Minist Nat Resources, Inst Oceanog 2, State Key Lab Satellite Ocean Environm Dynam, Hangzhou 310012, Peoples R China
Keywords
Semantic segmentation; Lawin transformer; CNN; Encoder-decoder; Attention mechanism; Deep convolutional networks; Fusion network
DOI
10.1016/j.knosys.2024.111819
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The thriving progress of Convolutional Neural Networks (CNNs) and the outstanding efficacy of Vision Transformers (ViTs) have delivered impressive outcomes in the domain of semantic segmentation. However, each model used in isolation entails a trade-off between high computational complexity and compromised computational efficiency. To address this challenge, we effectively combine CNN and encoder-decoder structures in a Transformer-inspired fashion, presenting the Serial Semantic Segmentation TransFormer via CNN (SSS-Former) model. To augment the feature extraction capability, we utilize the meticulously crafted SSS-CSPNet, resulting in a well-designed architecture for the holistic model. We propose a novel SSS-PN attention network that enhances the spatial topological connections of features, leading to improved overall performance. Additionally, the integration of SASPP bridges the semantic gap between multi-scale features and enhances segmentation ability for overlapping objects. To meet the requirements of real-time segmentation, we leverage a novel restructuring technique to devise a more lightweight and faster ResSSS-Former model. Extensive experimental results demonstrate that both SSS-Former and ResSSS-Former outperform existing state-of-the-art methods in terms of computational efficiency, accuracy, and speed. Remarkably, SSS-Former achieves 58.63% mIoU at 89.1 FPS on the ADE20K dataset. On the Cityscapes validation and test sets, it obtains mIoU scores of 85.1% and 85.2%, respectively, at 94.1 FPS. Our optimized ResSSS-Former achieves impressive real-time segmentation results, running at over 100 FPS while maintaining high segmentation accuracy. The compelling results on the ISPRS datasets further validate the effectiveness of our proposed models in segmenting multi-scale and overlapping objects.
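The abstract describes the model only at a high level; as a rough, assumption-heavy illustration of the general pattern it names (a serial CNN encoder followed by a Transformer-style decoding head for dense prediction), the PyTorch sketch below shows one minimal way such a model could be assembled. It is not the SSS-Former implementation: the class names (ConvStage, TransformerDecoderHead, SerialSegModel), channel widths, depths, and head counts are all illustrative assumptions.

# Minimal sketch of a serial CNN-encoder / Transformer-style-decoder segmentation model.
# NOT the SSS-Former implementation from the paper; every name and hyperparameter below
# is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvStage(nn.Module):
    """A small convolutional stage: two 3x3 convs, the second with 2x downsampling."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class TransformerDecoderHead(nn.Module):
    """Self-attention over the coarsest feature map, then upsampling to full resolution."""
    def __init__(self, dim, num_classes, num_heads=4, depth=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, dim_feedforward=dim * 2,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.classifier = nn.Conv2d(dim, num_classes, kernel_size=1)

    def forward(self, feat, out_size):
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)   # (B, H*W, C) token sequence
        tokens = self.encoder(tokens)              # global self-attention over tokens
        feat = tokens.transpose(1, 2).reshape(b, c, h, w)
        logits = self.classifier(feat)
        return F.interpolate(logits, size=out_size, mode="bilinear", align_corners=False)

class SerialSegModel(nn.Module):
    """CNN stages applied in series, then a Transformer head: a rough analogue of the
    'serial' CNN + encoder-decoder idea sketched in the abstract."""
    def __init__(self, num_classes=19, widths=(32, 64, 128)):
        super().__init__()
        chans = [3] + list(widths)
        self.stages = nn.ModuleList(
            [ConvStage(chans[i], chans[i + 1]) for i in range(len(widths))])
        self.head = TransformerDecoderHead(widths[-1], num_classes)

    def forward(self, x):
        out_size = x.shape[-2:]
        for stage in self.stages:
            x = stage(x)
        return self.head(x, out_size)

if __name__ == "__main__":
    model = SerialSegModel(num_classes=19)       # e.g. 19 classes as in Cityscapes
    dummy = torch.randn(1, 3, 256, 256)
    print(model(dummy).shape)                    # torch.Size([1, 19, 256, 256])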
Pages: 18