Learning Representations of Satellite Images From Metadata Supervision

Cited by: 0

Authors
Bourcier, Jules [1 ,2 ]
Dashyan, Gohar [1 ]
Alahari, Karteek [2 ]
Chanussot, Jocelyn [2 ]
Affiliations
[1] Preligens, Paris, France
[2] Univ Grenoble Alpes, CNRS, INRIA, Grenoble INP,LJK, Grenoble, France
Source
COMPUTER VISION - ECCV 2024, PT XXVII | 2025 / Vol. 15085
Keywords
Self-supervised and multimodal learning; Remote sensing; Benchmark; Classification
DOI
10.1007/978-3-031-73383-3_4
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Self-supervised learning is increasingly applied to Earth observation problems that leverage satellite and other remotely sensed data. Within satellite imagery, metadata such as time and location often hold significant semantic information that improves scene understanding. In this paper, we introduce Satellite Metadata-Image Pretraining (SatMIP), a new approach for harnessing metadata in the pretraining phase through a flexible and unified multimodal learning objective. SatMIP represents metadata as textual captions and aligns images with metadata in a shared embedding space by solving a metadata-image contrastive task. Our model learns a non-trivial image representation that can effectively handle recognition tasks. We further enhance this model by combining image self-supervision and metadata supervision, introducing SatMIPS. As a result, SatMIPS improves over its image-image pretraining baseline, SimCLR, and accelerates convergence. Comparison against four recent contrastive and masked autoencoding-based methods for remote sensing also highlights the efficacy of our approach. Furthermore, our framework enables multimodal classification with metadata to improve the performance of visual features, and yields more robust hierarchical pretraining. Code and pretrained models will be made available at: https://github.com/preligens-lab/satmip.
Pages: 54-71 (18 pages)