Meta-Learning and Self-Supervised Pretraining for Storm Event Imagery Translation

Cited by: 0
Authors
Rugina, Ileana [1 ]
Dangovski, Rumen [1 ]
Simek, Olga [2 ]
Veillette, Mark [2 ]
Khorrami, Pooya [2 ,3 ]
Soljacic, Marin
Cheung, Brian [4 ]
Affiliations
[1] MIT EECS, Cambridge, MA 02139 USA
[2] MIT Lincoln Lab, Lexington, MA USA
[3] MIT Phys, Cambridge, MA USA
[4] MIT CSAIL & BCS, Cambridge, MA USA
Source
2023 IEEE HIGH PERFORMANCE EXTREME COMPUTING CONFERENCE, HPEC | 2023
Funding
U.S. National Science Foundation;
Keywords
few-shot learning; self-supervised learning; meta-learning; generative adversarial networks;
DOI
10.1109/HPEC58863.2023.10363448
CLC number
TP3 [Computing Technology, Computer Technology];
Subject classification code
0812;
Abstract
Recent advances in deep learning have provided impressive results across a wide range of computational problems such as computer vision, natural language processing, and reinforcement learning. However, many of these improvements are constrained to problems with large-scale curated datasets that require substantial human labor to gather. Additionally, these models tend to generalize poorly under slight distributional shifts and in low-data regimes. In recent years, emerging fields such as meta-learning and self-supervised learning have been closing the gap between proof-of-concept results and real-life applications of machine learning by extending deep learning to the semi-supervised and few-shot domains. We follow this line of work and explore spatiotemporal structure in a recently introduced image-to-image translation problem for storm event imagery in order to: i) formulate a novel multi-task few-shot image generation benchmark in the field of AI for Earth and Space Science and ii) explore data augmentations in contrastive pretraining for image translation downstream tasks. We present several baselines for the few-shot problem and discuss trade-offs between different approaches. Our implementation and instructions to reproduce the experiments, available at https://github.com/irugina/meta-image-translation, are thoroughly tested on MIT SuperCloud and scale to other state-of-the-art HPC systems.
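The abstract frames the problem as multi-task few-shot learning, the setting addressed by gradient-based meta-learning methods such as MAML and Reptile. As a minimal illustration of that idea (not the paper's actual method, which meta-learns image-to-image translation networks), the sketch below runs a Reptile-style first-order meta-learning loop on a toy family of 1-D regression tasks; all function names and hyperparameters here are illustrative assumptions.

```python
import random

# Toy task family: each task is y = a * x with a task-specific slope a.
def sample_task(rng):
    a = rng.uniform(-2.0, 2.0)
    xs = [rng.uniform(-1.0, 1.0) for _ in range(10)]
    return [(x, a * x) for x in xs]

def sgd_step(w, batch, lr):
    # One gradient step on mean squared error for the scalar model y_hat = w * x.
    grad = sum(2.0 * (w * x - y) * x for x, y in batch) / len(batch)
    return w - lr * grad

def reptile(meta_w, n_iters=2000, inner_steps=5, inner_lr=0.1, meta_lr=0.1, seed=0):
    rng = random.Random(seed)
    for _ in range(n_iters):
        task = sample_task(rng)
        # Inner loop: adapt a copy of the meta-parameters to the sampled task.
        w = meta_w
        for _ in range(inner_steps):
            w = sgd_step(w, task, inner_lr)
        # Reptile outer update: move meta-parameters toward the adapted ones.
        meta_w += meta_lr * (w - meta_w)
    return meta_w

meta_w = reptile(0.0)
```

After meta-training, a handful of inner-loop steps on an unseen task's small support set suffices to fit that task; the paper's benchmark applies the same adapt-then-evaluate protocol to conditional image generation rather than scalar regression.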
Pages: 9