From Easy to Hard: Learning Language-Guided Curriculum for Visual Question Answering on Remote Sensing Data

Times Cited: 45
Authors
Yuan, Zhenghang [1 ]
Mou, Lichao [1 ,2 ]
Wang, Qi [3 ]
Zhu, Xiao Xiang [1 ,2 ]
Affiliations
[1] Tech Univ Munich TUM, Chair Data Sci Earth Observat, D-80333 Munich, Germany
[2] German Aerosp Ctr DLR, Remote Sensing Technol Inst IMF, D-82234 Wessling, Germany
[3] Northwestern Polytech Univ, Sch Artificial Intelligence Opt & Elect iOPEN, Xian 710072, Peoples R China
Source
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING | 2022 | Vol. 60
Funding
European Research Council
Keywords
Visualization; Task analysis; Feature extraction; Remote sensing; Computational modeling; Representation learning; Earth; self-paced curriculum learning (SPCL); spatial transformer; visual question answering (VQA); OBJECT DETECTION;
DOI
10.1109/TGRS.2022.3173811
Chinese Library Classification
P3 [Geophysics]; P59 [Geochemistry]
Discipline Codes
0708; 070902
Abstract
Visual question answering (VQA) for remote sensing scenes has great potential in intelligent human-computer interaction systems. Although VQA in computer vision has been widely researched, VQA for remote sensing data (RSVQA) is still in its infancy. Two characteristics need special consideration in the RSVQA task: 1) no object annotations are available in the RSVQA datasets, which makes it difficult for models to exploit informative region representations, and 2) the questions posed for each image have clearly different difficulty levels. Directly training a model with questions in a random order may confuse the model and limit its performance. To address these two problems, this article proposes a multi-level visual feature learning method that jointly extracts language-guided holistic and regional image features. In addition, a self-paced curriculum learning (SPCL)-based VQA model is developed to train networks on samples in an easy-to-hard order. More specifically, a language-guided SPCL method with a soft weighting strategy is explored in this work. The proposed model is evaluated on three public datasets, and extensive experimental results show that the proposed RSVQA framework achieves promising performance. Code will be available at https://gitlab.lrz.de/ai4eo/reasoning/VQA-easy2hard.
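The soft weighting strategy behind SPCL can be illustrated with a minimal sketch. This is an assumption based on the standard linear self-paced regularizer (Kumar et al.; Jiang et al.), not the paper's exact language-guided formulation: each sample's weight decreases with its current loss, and a growing age parameter `lam` gradually admits harder samples into training.

```python
# Illustrative soft weighting for self-paced curriculum learning.
# Easy samples (low loss) get weights near 1; samples with loss >= lam
# are excluded (weight 0). Increasing lam over training epochs admits
# progressively harder samples, realizing the easy-to-hard curriculum.

def soft_weights(losses, lam):
    """Linear soft weighting: w = 1 - loss/lam if loss < lam, else 0."""
    return [max(0.0, 1.0 - l / lam) for l in losses]

losses = [0.2, 0.8, 1.5, 3.0]  # per-sample training losses
early = soft_weights(losses, lam=1.0)   # early training: only easy samples weighted
late = soft_weights(losses, lam=4.0)    # later training: all samples, softly weighted
print(early, late)
```

In practice, the per-sample weights would multiply the per-sample losses before the gradient step, so hard (likely noisy or ambiguous) samples contribute little early on and more as the model matures.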
Pages: 11