Scalable Exact Inference in Multi-Output Gaussian Processes
Cited: 0
Authors:
Bruinsma, Wessel P. [1,2]
Perim, Eric [2]
Tebbutt, Will [1]
Hosking, J. Scott [3,4]
Solin, Arno [5]
Turner, Richard E. [1,6]
Affiliations:
[1] Univ Cambridge, Cambridge, England
[2] Invenia Labs, Cambridge, England
[3] British Antarctic Survey, Cambridge, England
[4] Alan Turing Inst, London, England
[5] Aalto Univ, Espoo, Finland
[6] Microsoft Res, Redmond, WA USA
Source:
37TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING (ICML 2020)
|
2020
Funding:
Engineering and Physical Sciences Research Council (EPSRC), UK;
Academy of Finland;
Keywords:
CLIMATE;
DOI:
Not available
Chinese Library Classification (CLC) number:
TP [Automation Technology, Computer Technology];
Discipline classification code:
0812;
Abstract:
Multi-output Gaussian processes (MOGPs) leverage the flexibility and interpretability of GPs while capturing structure across outputs, which is desirable, for example, in spatio-temporal modelling. The key problem with MOGPs is their computational scaling O(n^3 p^3), which is cubic in both the number of inputs n (e.g., time points or locations) and the number of outputs p. For this reason, a popular class of MOGPs assumes that the data live around a low-dimensional linear subspace, reducing the complexity to O(n^3 m^3). However, this cost remains cubic in the dimensionality of the subspace m, which is still prohibitively expensive for many applications. We propose the use of a sufficient statistic of the data to accelerate inference and learning in MOGPs with orthogonal bases. The method achieves linear scaling in m in practice, allowing these models to scale to large m without sacrificing significant expressivity or requiring approximation. This advance opens up a wide range of real-world tasks and can be combined with existing GP approximations in a plug-and-play way. We demonstrate the efficacy of the method on various synthetic and real-world data sets.
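To illustrate the core idea described in the abstract, the sketch below is a minimal Python/NumPy toy example, not the authors' reference implementation: it assumes an orthonormal mixing basis H (p x m), projects the p-dimensional observations onto that basis, and then fits m independent single-output GP regressions on the projected data, so the cost grows linearly in m rather than cubically. All names (H, rbf, gp_posterior_mean) and the isotropic noise model are illustrative assumptions.

# Minimal sketch, assuming an orthonormal mixing basis and isotropic noise.
# Not the authors' implementation; names and setup are illustrative.
import numpy as np

def rbf(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior_mean(x, y, x_star, noise=0.1):
    """Posterior mean of a single-output GP regression (cubic in n only)."""
    K = rbf(x, x) + noise * np.eye(len(x))
    K_star = rbf(x_star, x)
    return K_star @ np.linalg.solve(K, y)

rng = np.random.default_rng(0)
n, p, m = 100, 10, 3                       # inputs, outputs, latent processes
x = np.linspace(0, 10, n)

# Orthonormal basis H (p x m): its columns span the low-dimensional subspace.
H, _ = np.linalg.qr(rng.standard_normal((p, m)))

# Simulate m latent GPs and mix them into p observed outputs plus noise.
F = np.column_stack([
    rng.multivariate_normal(np.zeros(n),
                            rbf(x, x, lengthscale=1.5) + 1e-8 * np.eye(n))
    for _ in range(m)
])
Y = F @ H.T + 0.1 * rng.standard_normal((n, p))

# Project the p-dimensional observations onto the basis: this projection plays
# the role of the sufficient statistic in this toy setting.
T = Y @ H                                  # shape (n, m)

# Inference decouples: one independent single-output GP per latent process,
# so the cost is linear in m instead of cubic.
x_star = np.linspace(0, 10, 200)
F_pred = np.column_stack([
    gp_posterior_mean(x, T[:, j], x_star) for j in range(m)
])
Y_pred = F_pred @ H.T                      # map predictions back to output space
print(Y_pred.shape)                        # (200, 10)

In this toy setup the projection T = Y H is all that the m independent regressions need, which is what allows the linear-in-m scaling; the full method in the paper additionally handles learning of the basis and noise structure.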