Building Extraction from Remote Sensing Images with Sparse Token Transformers

Cited by: 146
Authors
Chen, Keyan [1 ,2 ,3 ]
Zou, Zhengxia [4 ]
Shi, Zhenwei [1 ,2 ,3 ]
Affiliations
[1] Beihang Univ, Sch Astronaut, Image Proc Ctr, Beijing 100191, Peoples R China
[2] Beihang Univ, Beijing Key Lab Digital Media, Beijing 100191, Peoples R China
[3] Beihang Univ, Sch Astronaut, State Key Lab Virtual Real Technol & Syst, Beijing 100191, Peoples R China
[4] Univ Michigan, Dept Computat Med & Bioinformat, Ann Arbor, MI 48109 USA
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation;
Keywords
remote sensing images; building extraction; transformers; sparse token sampler; EFFICIENT NETWORK; CLASSIFICATION; NET;
DOI
10.3390/rs13214441
CLC Number
X [Environmental Science, Safety Science];
Discipline Codes
08 ; 0830 ;
Abstract
Deep learning methods have achieved considerable progress in building extraction from remote sensing images. Most building extraction methods are based on Convolutional Neural Networks (CNNs). Recently, vision transformers have provided a better perspective for modeling long-range context in images, but they usually suffer from high computational complexity and memory usage. In this paper, we explore the potential of using transformers for efficient building extraction. We design an efficient dual-pathway transformer structure that learns the long-term dependency of tokens in both their spatial and channel dimensions and achieves state-of-the-art accuracy on benchmark building extraction datasets. Since a single building in a remote sensing image usually occupies only a very small fraction of the image pixels, we represent buildings as a set of "sparse" feature vectors in their feature space by introducing a new module called the "sparse token sampler". With this design, the computational complexity of the transformer can be reduced by over an order of magnitude. We refer to our method as Sparse Token Transformers (STT). Experiments conducted on the Wuhan University Aerial Building Dataset (WHU) and the Inria Aerial Image Labeling Dataset (INRIA) demonstrate the effectiveness and efficiency of our method. Compared with widely used segmentation methods and state-of-the-art building extraction methods, STT achieves the best performance at low time cost.
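To make the sparse-token idea from the abstract concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' released implementation): it scores each spatial token of a CNN feature map, keeps only the top-k highest-scoring tokens, and applies self-attention over that small set, so attention cost scales with k^2 rather than (H*W)^2. The module name, the scoring convolution, and the value of k are illustrative assumptions.

# Illustrative sketch of a sparse-token attention module, not the authors' code.
import torch
import torch.nn as nn

class SparseTokenAttention(nn.Module):
    def __init__(self, dim: int, num_tokens: int = 64, num_heads: int = 4):
        super().__init__()
        self.num_tokens = num_tokens                      # k: tokens to keep (assumed hyper-parameter)
        self.score = nn.Conv2d(dim, 1, kernel_size=1)     # per-pixel "importance" score
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) backbone feature map
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)          # (B, H*W, C)
        scores = self.score(feat).flatten(2).squeeze(1)   # (B, H*W)
        idx = scores.topk(self.num_tokens, dim=1).indices # indices of the k highest-scoring tokens
        sparse = torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, c))  # (B, k, C)
        # self-attention over k tokens only: cost O(k^2) instead of O((H*W)^2)
        out, _ = self.attn(sparse, sparse, sparse)
        return out                                        # (B, k, C) refined sparse tokens

if __name__ == "__main__":
    x = torch.randn(2, 256, 32, 32)                       # dummy backbone features
    y = SparseTokenAttention(256)(x)
    print(y.shape)                                        # torch.Size([2, 64, 256])

In the paper's full pipeline the refined sparse tokens would still have to be scattered back to their spatial locations (or otherwise fused with the dense feature map) before a segmentation head produces the building mask; that step is omitted here for brevity.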
Pages: 22