Decomposing feature-level variation with Covariate Gaussian Process Latent Variable Models

Citations: 0
Authors
Martens, Kaspar [1 ]
Campbell, Kieran R. [2 ,3 ,4 ]
Yau, Christopher [5 ,6 ]
Affiliations
[1] Univ Oxford, Dept Stat, Oxford, England
[2] Univ British Columbia, Dept Stat, Vancouver, BC, Canada
[3] BC Canc Agcy, Vancouver, BC, Canada
[4] UBC Data Sci Inst, Vancouver, BC, Canada
[5] Alan Turing Inst, London, England
[6] Univ Birmingham, Inst Canc & Genom Sci, Birmingham, W Midlands, England
Source
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97 | 2019 / Vol. 97
Funding
UK Medical Research Council; UK Engineering and Physical Sciences Research Council; Canadian Institutes of Health Research;
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The interpretation of complex high-dimensional data typically requires the use of dimensionality reduction techniques to extract explanatory low-dimensional representations. However, in many real-world problems these representations may not be sufficient to aid interpretation on their own, and it would be desirable to interpret the model in terms of the original features themselves. Our goal is to characterise how feature-level variation depends on latent low-dimensional representations, external covariates, and non-linear interactions between the two. In this paper, we propose to achieve this through a structured kernel decomposition in a hybrid Gaussian Process model which we call the Covariate Gaussian Process Latent Variable Model (c-GPLVM). We demonstrate the utility of our model on simulated examples and applications in disease progression modelling from high-dimensional gene expression data in the presence of additional phenotypes. In each setting we show how the c-GPLVM can extract low-dimensional structures from high-dimensional data sets whilst allowing a breakdown of feature-level variability that is not present in other commonly used dimensionality reduction approaches.
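The structured kernel decomposition mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the additive-plus-interaction form (latent kernel, covariate kernel, and their product) and all function names are assumptions chosen to match the abstract's description of decomposing feature-level variation into latent, covariate, and interaction components.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between 1-D input arrays a and b."""
    sq = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def structured_kernel(z, zp, x, xp):
    """Covariance split into latent, covariate, and interaction parts.

    K = K_z + K_x + K_z * K_x: two additive terms plus a multiplicative
    interaction, so variance in each output feature can be attributed
    to the latent representation, the covariate, or their interplay.
    """
    Kz = rbf_kernel(z, zp)    # variation explained by the latent z
    Kx = rbf_kernel(x, xp)    # variation explained by the covariate x
    return Kz + Kx + Kz * Kx  # non-linear interaction term

# Toy inputs: 1-D latent coordinates and a scalar covariate per sample.
z = np.linspace(0.0, 1.0, 5)
x = np.array([0.0, 0.0, 1.0, 1.0, 1.0])
K = structured_kernel(z, z, x, x)
assert K.shape == (5, 5)
assert np.allclose(K, K.T)  # a valid covariance matrix is symmetric
```

Because sums and products of positive semi-definite kernels remain positive semi-definite, the combined `K` is itself a valid Gaussian process covariance, and each of the three terms can be evaluated separately to break down feature-level variability.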
Pages: 10
Related Papers
50 in total
  • [21] Estimation and visualization of process states using latent variable models based on Gaussian process
    Kaneko, Hiromasa
    ANALYTICAL SCIENCE ADVANCES, 2021, 2 (5-6): 326 - 333
  • [22] Shaking Hands in Latent Space: Modeling Emotional Interactions with Gaussian Process Latent Variable Models
    Taubert, Nick
    Endres, Dominik
    Christensen, Andrea
    Giese, Martin A.
    KI 2011: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2011, 7006 : 330 - +
  • [23] Feature-level Approach for the Evaluation of Text Classification Models
    Bracamonte, Vanessa
    Hidano, Seira
    Nakamura, Toru
    Kiyomoto, Shinsaku
    PROCEEDINGS OF THE 17TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (IVAPP), VOL 3, 2022, : 164 - 170
  • [24] Learning GP-BayesFilters via Gaussian process latent variable models
    Ko, Jonathan
    Fox, Dieter
    AUTONOMOUS ROBOTS, 2011, 30 (01) : 3 - 23
  • [26] Generation of Stochastic Interconnect Responses via Gaussian Process Latent Variable Models
    De Ridder, Simon
    Deschrijver, Dirk
    Manfredi, Paolo
    Dhaene, Tom
    Vande Ginste, Dries
    IEEE TRANSACTIONS ON ELECTROMAGNETIC COMPATIBILITY, 2019, 61 (02) : 582 - 585
  • [27] Shared Gaussian Process Latent Variable Models for Handling Ambiguous Facial Expressions
    Ek, Carl Henrik
    Jaeckel, Peter
    Campbell, Neill
    Lawrence, Neil D.
    Melhuish, Chris
    INTELLIGENT SYSTEMS AND AUTOMATION, 2009, 1107 : 147 - +
  • [28] Distributed Variational Inference in Sparse Gaussian Process Regression and Latent Variable Models
    Gal, Yarin
    van der Wilk, Mark
    Rasmussen, Carl E.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 27 (NIPS 2014), 2014, 27
  • [29] Pseudo-marginal Bayesian inference for Gaussian process latent variable models
    Gadd, C.
    Wade, S.
    Shah, A. A.
    MACHINE LEARNING, 2021, 110 (06) : 1105 - 1143