Mixture of linear experts model for censored data: A novel approach with scale-mixture of normal distributions

Cited by: 16
Authors
Mirfarah, Elham [1 ]
Naderi, Mehrdad [1 ]
Chen, Ding-Geng [1 ]
Affiliations
[1] Univ Pretoria, Fac Nat & Agr Sci, Dept Stat, Pretoria, South Africa
Funding
Medical Research Council (UK); National Research Foundation of Singapore;
Keywords
Mixture of linear experts model; Scale-mixture of normal class of distributions; EM-type algorithm; Censored data; MEAN-VARIANCE MIXTURE; REGRESSION-MODELS; MAXIMUM-LIKELIHOOD; FINITE MIXTURE; ROBUST MIXTURE; EM ALGORITHM; OF-EXPERTS; INFERENCE; ECM;
DOI
10.1016/j.csda.2021.107182
Chinese Library Classification
TP39 [Applications of Computers];
Discipline Code
081203; 0835;
Abstract
The mixture of linear experts (MoE) model is a widespread statistical framework for the modeling, classification, and clustering of data. Built on the normality assumption of the error terms for mathematical and computational convenience, the classical MoE model faces two challenges: (1) it is sensitive to atypical observations and outliers, and (2) it can produce misleading inferential results for censored data. The aim is to resolve both challenges simultaneously by proposing a robust MoE model for model-based clustering and discriminant analysis of censored data, with the scale-mixture of normal (SMN) class of distributions for the unobserved error terms. An analytical expectation-maximization (EM) type algorithm is developed to obtain the maximum likelihood parameter estimates. Simulation studies examine the performance, effectiveness, and robustness of the proposed methodology. Finally, a real dataset illustrates the superiority of the new model. (C) 2021 Elsevier B.V. All rights reserved.
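The EM-type estimation the abstract describes can be illustrated with a heavily simplified sketch: Gaussian (rather than SMN) errors, no censoring, softmax gating updated by a single gradient step per iteration, and a crude sorted-split initialization. The function `fit_mole` and all variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_mole(X, y, K=2, n_iter=200, gate_lr=0.1):
    """Generalized EM for a K-expert mixture of linear experts.

    Simplified illustration only: normal errors (no SMN heavy tails),
    no censoring, and a one-step gradient update for the gating network.
    """
    n = X.shape[0]
    Xg = np.column_stack([np.ones(n), X])   # design matrix with intercept
    d = Xg.shape[1]
    # Crude init: OLS on K contiguous slices of the data sorted by X[:, 0]
    order = np.argsort(X[:, 0])
    beta = np.empty((K, d))
    for k, idx in enumerate(np.array_split(order, K)):
        beta[k], *_ = np.linalg.lstsq(Xg[idx], y[idx], rcond=None)
    gamma = np.zeros((K, d))                # gating coefficients (row 0 fixed at 0)
    sigma2 = np.full(K, y.var())
    loglik = []
    for _ in range(n_iter):
        # E-step: gating probabilities and responsibilities
        eta = Xg @ gamma.T
        pi = np.exp(eta - eta.max(axis=1, keepdims=True))
        pi /= pi.sum(axis=1, keepdims=True)
        mu = Xg @ beta.T
        dens = np.exp(-0.5 * (y[:, None] - mu) ** 2 / sigma2) \
            / np.sqrt(2 * np.pi * sigma2)
        joint = pi * dens
        loglik.append(np.log(joint.sum(axis=1)).sum())
        r = joint / joint.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per expert
        for k in range(K):
            w = r[:, k]
            beta[k] = np.linalg.solve((Xg * w[:, None]).T @ Xg, Xg.T @ (w * y))
            resid = y - Xg @ beta[k]
            sigma2[k] = (w * resid ** 2).sum() / w.sum()
        # Partial gating update: one gradient-ascent step on the multinomial
        # log-likelihood with responsibilities as soft targets
        gamma[1:] += gate_lr * ((r - pi)[:, 1:].T @ Xg) / n
    return beta, gamma, sigma2, np.array(loglik)
```

In the paper's setting, the expert densities (and hence the E-step conditional expectations) would instead involve SMN mixing variables and truncated moments for the censored responses; this sketch only conveys the alternating structure of the algorithm.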
Pages: 19