From Easy to Hard: Learning Language-Guided Curriculum for Visual Question Answering on Remote Sensing Data

Cited by: 45
Authors
Yuan, Zhenghang [1 ]
Mou, Lichao [1 ,2 ]
Wang, Qi [3 ]
Zhu, Xiao Xiang [1 ,2 ]
Affiliations
[1] Tech Univ Munich TUM, Chair Data Sci Earth Observat, D-80333 Munich, Germany
[2] German Aerosp Ctr DLR, Remote Sensing Technol Inst IMF, D-82234 Wessling, Germany
[3] Northwestern Polytech Univ, Sch Artificial Intelligence Opt & Elect iOPEN, Xian 710072, Peoples R China
Source
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING | 2022, Vol. 60
Funding
European Research Council;
Keywords
Visualization; Task analysis; Feature extraction; Remote sensing; Computational modeling; Representation learning; Earth; self-paced curriculum learning (SPCL); spatial transformer; visual question answering (VQA); OBJECT DETECTION;
DOI
10.1109/TGRS.2022.3173811
CLC Classification Codes
P3 [Geophysics]; P59 [Geochemistry];
Discipline Classification Codes
0708; 070902;
Abstract
Visual question answering (VQA) for remote sensing scenes has great potential in intelligent human-computer interaction systems. Although VQA in computer vision has been widely researched, VQA for remote sensing data (RSVQA) is still in its infancy. Two characteristics of the RSVQA task call for special consideration: 1) no object annotations are available in RSVQA datasets, which makes it difficult for models to exploit informative region representations, and 2) each image is paired with questions of clearly different difficulty levels, so directly training a model on questions in random order may confuse it and limit performance. To address these two problems, this article proposes a multilevel visual feature learning method that jointly extracts language-guided holistic and regional image features. In addition, a self-paced curriculum learning (SPCL)-based VQA model is developed to train the network on samples in an easy-to-hard order. More specifically, a language-guided SPCL method with a soft weighting strategy is explored. The proposed model is evaluated on three public datasets, and extensive experimental results show that the proposed RSVQA framework achieves promising performance. Code will be available at https://gitlab.lrz.de/ai4eo/reasoning/VQA-easy2hard.
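As a rough illustration of the soft weighting idea behind SPCL, the sketch below reweights per-sample VQA losses so that easy questions (low loss) dominate early training and harder ones are admitted as a pace parameter grows. This is a minimal PyTorch sketch of the generic linear soft-weighting scheme from the self-paced learning literature, not the authors' released implementation; the function names, the training loop, and the pace schedule are assumptions.

import torch

def soft_spl_weights(losses: torch.Tensor, lam: float) -> torch.Tensor:
    # Linear soft self-paced weights: a sample with loss l gets weight
    # max(0, 1 - l / lam), so easy samples (low loss) count almost fully
    # and samples with loss >= lam are excluded from the current curriculum.
    return torch.clamp(1.0 - losses / lam, min=0.0).detach()

def train_epoch(model, loader, optimizer, criterion, lam):
    # criterion must be built with reduction='none' so that it returns
    # one loss value per sample rather than a batch mean.
    model.train()
    for images, questions, answers in loader:
        logits = model(images, questions)
        per_sample_loss = criterion(logits, answers)
        weights = soft_spl_weights(per_sample_loss, lam)
        loss = (weights * per_sample_loss).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Assumed easy-to-hard schedule: grow the pace parameter after each
# epoch, e.g. lam *= 1.1, so progressively harder questions enter training.

Growing lam over epochs realizes the easy-to-hard ordering described in the abstract; how sample difficulty is estimated and guided by the question language is specific to the paper's method.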
Pages: 11