RAANet: A Residual ASPP with Attention Framework for Semantic Segmentation of High-Resolution Remote Sensing Images

Cited: 77
Authors
Liu, Runrui [1 ]
Tao, Fei [1 ,2 ]
Liu, Xintao [2 ]
Na, Jiaming [3 ]
Leng, Hongjun [1 ]
Wu, Junjie [1 ]
Zhou, Tong [1 ,4 ]
Affiliations
[1] Nantong Univ, Sch Geog Sci, Nantong 226007, Peoples R China
[2] Hong Kong Polytech Univ, Dept Land Surveying & Geoinformat, Hong Kong, Peoples R China
[3] Nanjing Forestry Univ, Coll Civil Engn, Nanjing 210037, Peoples R China
[4] Jiangsu Yangtze River Econ Belt Res Inst, Nantong 226007, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
semantic segmentation; remote sensing; convolutional block attention module; dual attention module; residual structure; NETWORK;
DOI
10.3390/rs14133109
Chinese Library Classification
X [Environmental Science, Safety Science];
Discipline Classification Code
08 ; 0830 ;
Abstract
Classification of land use and land cover from remote sensing images is widely used in natural resource and urban information management. The variability and complex backgrounds of land use in high-resolution imagery pose greater challenges for remote sensing semantic segmentation. To obtain multi-scale semantic information and improve the classification accuracy of land-use types in remote sensing images, deep learning models have received wide attention. Inspired by the atrous spatial pyramid pooling (ASPP) framework, this paper constructs an improved deep learning model named RAANet (Residual ASPP with Attention Net), which builds a new residual ASPP by embedding an attention module and a residual structure into the ASPP. Its encoder contains five dilated attention convolution units and a residual unit. The former obtain important semantic information at more scales, while the residual unit reduces the complexity of the network and prevents vanishing gradients. In practical applications, depending on the characteristics of the dataset, the attention unit can adopt different attention modules, such as the convolutional block attention module (CBAM). Experimental results on the land-cover domain adaptive semantic segmentation (LoveDA) and ISPRS Vaihingen datasets show that this model improves the classification accuracy of semantic segmentation compared with current deep learning models.
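The residual ASPP described in the abstract rests on atrous (dilated) convolution: each parallel branch samples the input with a different dilation rate, enlarging the receptive field without adding parameters, and the branch outputs are combined into multi-scale features. A minimal 1-D sketch of this idea (illustrative only; the function names and the toy 1-D setting are ours, not the authors' implementation, which works on 2-D feature maps with attention-weighted branches):

```python
# Illustrative sketch of atrous (dilated) convolution, the building
# block behind ASPP's multi-scale branches. Not the paper's code.

def atrous_conv1d(signal, kernel, rate):
    """Valid-mode 1-D convolution where taps are spaced `rate` apart."""
    k = len(kernel)
    span = (k - 1) * rate + 1  # receptive field of one output sample
    return [
        sum(kernel[j] * signal[i + j * rate] for j in range(k))
        for i in range(len(signal) - span + 1)
    ]

def aspp_1d(signal, kernel, rates=(1, 2, 4)):
    """Run parallel atrous branches at several rates, as ASPP does,
    and return each branch's multi-scale response."""
    return {r: atrous_conv1d(signal, kernel, r) for r in rates}
```

With a length-3 averaging-style kernel, the rate-2 branch of `aspp_1d` covers a span of 5 input samples per output, showing how larger rates capture wider context from the same small kernel. The rates `(1, 2, 4)` are arbitrary defaults for illustration; RAANet's encoder uses five dilated attention convolution units.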
Pages: 18
Related References
34 records in total
  • [1] VNet: An End-to-End Fully Convolutional Neural Network for Road Extraction From High-Resolution Remote Sensing Data
    Abdollahi, Abolfazl
    Pradhan, Biswajeet
    Alamri, Abdullah
    [J]. IEEE ACCESS, 2020, 8 : 179424 - 179436
  • [2] Comparative Research on Deep Learning Approaches for Airplane Detection from Very High-Resolution Satellite Images
    Alganci, Ugur
    Soydas, Mehmet
    Sertel, Elif
    [J]. REMOTE SENSING, 2020, 12 (03)
  • [3] [Anonymous], 2020, INDIAN J SCI TECHNOL, V13, P1619, DOI 10.17485/IJST/v13i16.271
  • [4] Asad Muhammad Hamza, 2020, Information Processing in Agriculture, V7, P535, DOI 10.1016/j.inpa.2019.12.002
  • [5] Deep semantic segmentation of natural and medical images: a review
    Asgari Taghanaki, Saeid
    Abhishek, Kumar
    Cohen, Joseph Paul
    Cohen-Adad, Julien
    Hamarneh, Ghassan
    [J]. ARTIFICIAL INTELLIGENCE REVIEW, 2021, 54 (01) : 137 - 178
  • [6] Chen LB, 2017, IEEE INT SYMP NANO, P1, DOI 10.1109/NANOARCH.2017.8053709
  • [7] LANet: Local Attention Embedding to Improve the Semantic Segmentation of Remote Sensing Images
    Ding, Lei
    Tang, Hao
    Bruzzone, Lorenzo
    [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2021, 59 (01) : 426 - 435
  • [8] Incorporating DeepLabv3+ and object-based image analysis for semantic segmentation of very high resolution remote sensing images
    Du, Shouji
    Du, Shihong
    Liu, Bo
    Zhang, Xiuyuan
    [J]. INTERNATIONAL JOURNAL OF DIGITAL EARTH, 2021, 14 (03) : 357 - 378
  • [9] Dual Attention Network for Scene Segmentation
    Fu, Jun
    Liu, Jing
    Tian, Haijie
    Li, Yong
    Bao, Yongjun
    Fang, Zhiwei
    Lu, Hanqing
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 3141 - 3149
  • [10] Garcia A, 2017, APPR DIGIT GAME STUD, V5, P1