Visual Grounding in Remote Sensing Images

Cited by: 23
Authors
Sun, Yuxi [1 ]
Feng, Shanshan [1 ]
Li, Xutao [1 ]
Ye, Yunming [1 ]
Kang, Jian [2 ]
Huang, Xu [1 ]
Affiliations
[1] Harbin Institute of Technology, Shenzhen, China
[2] Soochow University, Suzhou, China
Source
PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022 | 2022
Funding
National Natural Science Foundation of China
Keywords
dataset; object retrieval; visual grounding; remote sensing; referring expression;
DOI
10.1145/3503161.3548316
Chinese Library Classification
TP39 [Computer Applications]
Discipline Code
081203; 0835
Abstract
Retrieving ground objects from a large-scale remote sensing image is important for many applications. We present a novel problem of visual grounding in remote sensing images. Visual grounding aims to locate particular objects (in the form of a bounding box or segmentation mask) in an image according to a natural language expression. This task has been studied in the computer vision community, but existing benchmark datasets and methods focus mainly on natural images rather than remote sensing images. Compared with natural images, remote sensing images contain large-scale scenes and the geographical spatial information of ground objects (e.g., longitude and latitude), and existing methods cannot handle these challenges. In this paper, we collect a new visual grounding dataset, called RSVG, and design a new method, namely GeoVG. The proposed method consists of a language encoder, an image encoder, and a fusion module. The language encoder learns numerical geospatial relations and represents a complex expression as a geospatial relation graph. The image encoder learns large-scale remote sensing scenes with adaptive region attention. The fusion module fuses the text and image features for visual grounding. We evaluate the proposed method by comparing it with state-of-the-art methods on RSVG. Experiments show that our method outperforms previous methods on the proposed dataset. https://sunyuxi.github.io/publication/GeoVG
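The abstract outlines a three-part architecture: a language encoder, an image encoder, and a fusion module that conditions region features on the expression before predicting a box. The following is a minimal, hypothetical PyTorch sketch of that pipeline, based only on the abstract. All class names, dimensions, the GRU/CNN choices, and the single-box regression head are assumptions for illustration, not the authors' GeoVG implementation; in particular, the paper's geospatial relation graph and adaptive region attention are stood in for by a plain recurrent encoder and standard dot-product attention (see the project page linked above for the actual method).

    # Sketch of a grounding pipeline in the spirit of the abstract.
    # NOT the authors' code: names, sizes, and heads are assumptions.
    import torch
    import torch.nn as nn

    class LanguageEncoder(nn.Module):
        """Encodes a tokenized expression into one vector; stands in for
        the paper's geospatial-relation-graph encoder (assumed GRU here)."""
        def __init__(self, vocab_size=10000, dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.rnn = nn.GRU(dim, dim, batch_first=True)

        def forward(self, tokens):                 # tokens: (B, T) int64
            emb = self.embed(tokens)               # (B, T, dim)
            _, h = self.rnn(emb)                   # h: (1, B, dim)
            return h.squeeze(0)                    # (B, dim)

    class ImageEncoder(nn.Module):
        """Small CNN backbone producing a grid of region features; the
        paper's adaptive region attention is approximated downstream."""
        def __init__(self, dim=256):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
            )

        def forward(self, image):                  # image: (B, 3, H, W)
            fmap = self.backbone(image)            # (B, dim, H/4, W/4)
            return fmap.flatten(2).transpose(1, 2) # (B, N, dim) region tokens

    class FusionGrounder(nn.Module):
        """Fuses text and region features, regresses one normalized box
        (cx, cy, w, h) for the referred object."""
        def __init__(self, dim=256):
            super().__init__()
            self.lang = LanguageEncoder(dim=dim)
            self.img = ImageEncoder(dim=dim)
            self.box_head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                          nn.Linear(dim, 4), nn.Sigmoid())

        def forward(self, image, tokens):
            q = self.lang(tokens).unsqueeze(1)     # (B, 1, dim) text query
            regions = self.img(image)              # (B, N, dim)
            scale = q.shape[-1] ** 0.5
            attn = torch.softmax(q @ regions.transpose(1, 2) / scale, dim=-1)
            fused = (attn @ regions).squeeze(1)    # (B, dim) text-conditioned
            return self.box_head(fused)            # (B, 4) in [0, 1]

    model = FusionGrounder()
    boxes = model(torch.randn(2, 3, 256, 256), torch.randint(0, 10000, (2, 12)))
    print(boxes.shape)  # torch.Size([2, 4])

A real system would replace the toy backbone with a pretrained CNN, score many candidate regions instead of regressing one box, and encode numeric geospatial relations (longitude, latitude, distances) explicitly, which is the gap the paper's relation graph is designed to fill.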
Pages: 9