RELAXNet: Residual efficient learning and attention expected fusion network for real-time semantic segmentation

Cited by: 35
Authors
Liu, Jin [1 ]
Xu, Xiaoqing [1 ]
Shi, Yiqing [2 ]
Deng, Cheng [1 ]
Shi, Miaohua [1 ]
Affiliations
[1] Xidian Univ, Sch Elect Engn, Xian 710071, Peoples R China
[2] Xidian Univ, Sch Artificial Intelligence, Xian 710071, Peoples R China
Keywords
Semantic segmentation; Real-time analysis; Attention mechanism
DOI
10.1016/j.neucom.2021.12.003
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
As a dense prediction problem, semantic segmentation consumes extensive memory and computational resources. However, practical applications require the model to run in real time on portable devices, so it is crucial to seek a trade-off between segmentation accuracy and inference speed. In this paper, we propose a lightweight, attention-based semantic segmentation method to address this problem. First, we use a novel Efficient Bottleneck Residual (EBR) module and an Efficient Asymmetric Bottleneck Residual (EABR) module to extract both local and contextual information; these modules adopt a well-designed combination of depth-wise convolution, dilated convolution, and factorized convolution, with channel shuffle to boost information interaction. Second, we introduce an attention mechanism into the skip connections between the encoder and decoder to promote reasonable fusion of high-level and low-level features, which further enhances accuracy. With only 1.9 M parameters, our model obtains 74.8% mIoU at 64 FPS on the Cityscapes dataset and 71.2% mIoU at 79 FPS on the CamVid dataset. Experiments demonstrate that our model achieves competitive segmentation accuracy and running speed while keeping the parameter count low. (c) 2021 Published by Elsevier B.V.
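The record above only sketches the architecture, so the fragments below are illustrative PyTorch sketches, not the authors' implementation. The first shows how a bottleneck residual block in the spirit of EBR/EABR could combine the ingredients the abstract lists: a 1x1 reduction, factorized (asymmetric) depth-wise convolutions, a parallel dilated branch for context, a 1x1 expansion, a residual connection, and a channel shuffle to mix information across groups. All names (channel_shuffle, EABRSketch), channel ratios, and the group count are assumptions.

```python
import torch
import torch.nn as nn


def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups (ShuffleNet-style) so grouped
    convolutions can exchange information. Assumes C % groups == 0."""
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w).transpose(1, 2).contiguous()
    return x.view(n, c, h, w)


class EABRSketch(nn.Module):
    """Hypothetical bottleneck in the spirit of the paper's EABR module:
    1x1 reduce -> factorized depth-wise convs (plain + dilated branch)
    -> 1x1 expand -> residual add -> channel shuffle."""

    def __init__(self, channels: int, dilation: int = 2, reduction: int = 2):
        super().__init__()
        mid = channels // reduction
        self.reduce = nn.Sequential(
            nn.Conv2d(channels, mid, 1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True))
        # factorized 3x3 depth-wise conv (3x1 then 1x3) for local detail
        self.local = nn.Sequential(
            nn.Conv2d(mid, mid, (3, 1), padding=(1, 0), groups=mid, bias=False),
            nn.Conv2d(mid, mid, (1, 3), padding=(0, 1), groups=mid, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True))
        # dilated, factorized depth-wise branch for a larger receptive field
        self.context = nn.Sequential(
            nn.Conv2d(mid, mid, (3, 1), padding=(dilation, 0),
                      dilation=(dilation, 1), groups=mid, bias=False),
            nn.Conv2d(mid, mid, (1, 3), padding=(0, dilation),
                      dilation=(1, dilation), groups=mid, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True))
        self.expand = nn.Sequential(
            nn.Conv2d(mid, channels, 1, bias=False),
            nn.BatchNorm2d(channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.reduce(x)
        y = self.local(y) + self.context(y)  # fuse local and contextual cues
        y = self.expand(y)
        return channel_shuffle(torch.relu(y + x), groups=4)
```

Continuing the sketch, the second fragment shows one plausible reading of the attention-gated skip connection: a squeeze-and-excitation-style channel gate computed from the high-level (decoder) features re-weights the low-level (encoder) features before fusion. AttentionFusion and its layout are likewise assumptions; the code presumes both inputs already share channel count and resolution.

```python
class AttentionFusion(nn.Module):
    """Hypothetical attention gate on the encoder-decoder skip connection."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global context
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid())                                  # per-channel weights

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # semantic features decide which low-level detail channels to keep
        return low * self.gate(high) + high


if __name__ == "__main__":
    x = torch.randn(1, 64, 64, 128)
    block, fuse = EABRSketch(64, dilation=4), AttentionFusion(64)
    y = block(x)
    print(fuse(x, y).shape)  # torch.Size([1, 64, 64, 128])
```

Gating the skip connection this way adds only a few thousand parameters per stage, which is at least consistent with the 1.9 M total the abstract reports for the whole model.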
Pages: 115 - 127 (13 pages)
Related papers (50 records in total)
  • [1] A hybrid attention multi-scale fusion network for real-time semantic segmentation
    Ye, Baofeng
    Xue, Renzheng
    Wu, Qianlong
    SCIENTIFIC REPORTS, 2025, 15 (01):
  • [2] A lightweight network with attention decoder for real-time semantic segmentation
    Wang, Kang
    Yang, Jinfu
    Yuan, Shuai
    Li, Mingai
    VISUAL COMPUTER, 2022, 38 (07) : 2329 - 2339
  • [3] EANET: EFFICIENT ATTENTION-AUGMENTED NETWORK FOR REAL-TIME SEMANTIC SEGMENTATION
    Dong, Jianan
    Guo, Jichang
    Yue, Huihui
    Gao, Huan
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 3968 - 3972
  • [4] Spatial-Semantic Fusion Network for Semantic Segmentation in Real-time
    Fang Yu
    Zhang Xuehe
    Zhang He
    Liu Gangfeng
    Li Changle
    Zhao Jie
    2019 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS (AIM), 2019, : 30 - 35
  • [5] Real-Time Semantic Segmentation Network Based on Regional Self-Attention
    Bao Hailong
    Wan Min
    Liu Zhongxian
    Qin Mian
    Cui Haoyu
    LASER & OPTOELECTRONICS PROGRESS, 2021, 58 (08)
  • [6] ULAF-Net: Ultra lightweight attention fusion network for real-time semantic segmentation
    Hu, Kaidi
    Xie, Zongxia
    Hu, Qinghua
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, 15 (07) : 2987 - 3003
  • [7] ERFNet: Efficient Residual Factorized ConvNet for Real-Time Semantic Segmentation
    Romera, Eduardo
    Alvarez, Jose M.
    Bergasa, Luis M.
    Arroyo, Roberto
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2018, 19 (01) : 263 - 272
  • [8] Real-Time Semantic Segmentation With Fast Attention
    Hu, Ping
    Perazzi, Federico
    Heilbron, Fabian Caba
    Wang, Oliver
    Lin, Zhe
    Saenko, Kate
    Sclaroff, Stan
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6 (01) : 263 - 270
  • [9] Lightweight and efficient asymmetric network design for real-time semantic segmentation
    Zhang, Xiu-Ling
    Du, Bing-Ce
    Luo, Zhao-Ci
    Ma, Kai
    APPLIED INTELLIGENCE, 2022, 52 : 564 - 579