Uncertainty Estimation With Neural Processes for Meta-Continual Learning

Cited by: 7
Authors
Wang, Xuesong [1 ]
Yao, Lina [2 ,3 ]
Wang, Xianzhi [4 ]
Paik, Hye-Young [1 ]
Wang, Sen [5 ]
Affiliations
[1] Univ New South Wales, Sch Comp Sci & Engn, Sydney, NSW 2052, Australia
[2] Univ New South Wales, CSIRO's Data61, Sydney, NSW 2052, Australia
[3] Univ New South Wales, Sch Comp Sci & Engn, Sydney, NSW 2052, Australia
[4] Univ Technol Sydney, Sch Comp Sci, Sydney, NSW 2007, Australia
[5] Univ Queensland, Sch Informat Technol & Elect Engn, Brisbane, Qld 4072, Australia
Keywords
Continual learning; COVID forecast; evolving data streams; meta-learning; neural processes (NPs); uncertainty estimation;
DOI
10.1109/TNNLS.2022.3215633
Chinese Library Classification
TP18 [theory of artificial intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The ability to evaluate uncertainties in evolving data streams has become equally, if not more, crucial than building a static predictor. For instance, during the pandemic, a model should account for sources of uncertainty such as governmental policies, meteorological features, and vaccination schedules. Neural process families (NPFs) have recently shown promise in predicting such uncertainties by bridging Gaussian processes (GPs) and neural networks (NNs). Their ability to output mean predictions along with the associated variances, i.e., uncertainties, makes them suitable for predictions with insufficient data, as in meta-learning or few-shot learning. However, existing models have not addressed continual learning, which imposes stricter constraints on data access. To this end, we introduce an NPF member, meta-continual learning with neural processes (MCLNP), for uncertainty estimation. We enable two levels of uncertainty estimation: the local uncertainty at individual points and the global uncertainty p(z), which represents the function evolution in dynamic environments. To facilitate continual learning, we hypothesize that previous knowledge can be applied to the current task and hence adopt a coreset as a memory buffer to alleviate catastrophic forgetting. The relationships between the degree of global uncertainty and both the intratask diversity and the model complexity are discussed. We estimate prediction uncertainties under multiple types of evolution, including abrupt, gradual, and recurrent shifts. The applications encompass meta-continual learning on 1-D and 2-D datasets and a novel spatiotemporal COVID dataset. The results show that our method outperforms the baselines in likelihood and can rebound quickly even for heavily evolved data streams.
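
To make the two core ideas in the abstract concrete, below is a minimal Python/PyTorch sketch of (i) a neural-process-style predictor that outputs a per-point mean and variance, i.e., the local uncertainty, and (ii) a coreset memory buffer replayed alongside each new task to alleviate catastrophic forgetting. This is an illustrative reconstruction, not the authors' MCLNP implementation: the names (TinyNeuralProcess, CoresetBuffer), layer sizes, and the toy sine-shift stream are all assumptions, and the paper's global uncertainty p(z) over a latent variable is omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNeuralProcess(nn.Module):
    """Conditional-NP-style model: context pairs -> Gaussian over queries."""
    def __init__(self, x_dim=1, y_dim=1, r_dim=64):
        super().__init__()
        # Encoder maps each context pair (x_i, y_i) to a representation r_i.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, r_dim), nn.ReLU(),
            nn.Linear(r_dim, r_dim))
        # Decoder conditions a query x* on the aggregated representation and
        # emits a mean and log-variance, i.e., the local uncertainty.
        self.decoder = nn.Sequential(
            nn.Linear(x_dim + r_dim, r_dim), nn.ReLU(),
            nn.Linear(r_dim, 2 * y_dim))

    def forward(self, x_ctx, y_ctx, x_qry):
        # Mean-aggregate the per-pair representations into one vector r.
        r = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1)).mean(dim=0)
        h = torch.cat([x_qry, r.expand(x_qry.size(0), -1)], dim=-1)
        mean, log_var = self.decoder(h).chunk(2, dim=-1)
        return mean, log_var.exp()

class CoresetBuffer:
    """Fixed-capacity memory of past (x, y) points replayed with each task."""
    def __init__(self, capacity=128):
        self.capacity, self.x, self.y = capacity, [], []

    def add(self, x, y):
        for xi, yi in zip(x, y):
            if len(self.x) < self.capacity:
                self.x.append(xi)
                self.y.append(yi)

    def sample(self):
        return torch.stack(self.x), torch.stack(self.y)

model, buffer = TinyNeuralProcess(), CoresetBuffer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for task in range(3):                     # a toy stream with abrupt shifts
    x = torch.rand(32, 1) * 4 - 2
    y = torch.sin(x + task)               # the target function drifts per task
    if buffer.x:                          # replay the coreset with the new task
        bx, by = buffer.sample()
        x, y = torch.cat([x, bx]), torch.cat([y, by])
    mean, var = model(x[:16], y[:16], x[16:])
    loss = F.gaussian_nll_loss(mean, y[16:], var)  # likelihood-based objective
    opt.zero_grad(); loss.backward(); opt.step()
    buffer.add(x[:8], y[:8])              # store a few current-task points

A full latent neural process would replace the deterministic aggregate r with a distribution q(z | context) trained via an ELBO, which is the mechanism behind the global uncertainty p(z) discussed in the abstract.
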
Pages: 6887-6897 (11 pages)