A novel feature selection method using generalized inverted Dirichlet-based HMMs for image categorization

Cited by: 0
Authors
Rim Nasfi
Nizar Bouguila
Affiliation
[1] Concordia Institute for Information Systems Engineering,
Source
International Journal of Machine Learning and Cybernetics | 2022, Volume 13
Keywords
Hidden Markov models; Generalized inverted Dirichlet; Feature selection; Automatic recognition; Facial expressions recognition; Scene categorization;
DOI
Not available
Abstract
Hidden Markov models (HMMs) have consistently been a powerful tool for challenging machine learning tasks such as automatic recognition. A recognition system perceives the objects of its universe through the information carried by their characteristics, or features. However, not all available data is valuable for distinguishing between different objects, scenes, or scenarios (analogous to states). More often than not, automatic recognition is therefore accompanied by feature selection, which reduces the collected features to a relevant subset. Although sparse, the literature on feature selection for HMMs largely presupposes either a single Gaussian or a Gaussian mixture model (GMM) as the emission distribution. The proposed method builds upon the feature saliency model introduced by Adams, Cogill, and Beling (in IEEE Access 4:1642–1657) and adapts it to complex multidimensional data by employing, as a novel contribution, generalized inverted Dirichlet (GID) mixture models as emission probabilities. We use an Expectation-Maximization (EM) algorithm (Dempster et al. in J R Stat Soc 39(1):1–22) to compute maximum a posteriori (MAP) estimates (Gauvain and Lee in IEEE Transact Speech Audio Process 2(2):291–298) of the model parameters. The complete inference and parameter estimation of our GID-FSHMM (GID feature-selection-based HMM) are detailed in this work. Automatic recognition applications, namely facial expression recognition and scene categorization, demonstrate comparable or higher performance relative to the widely used Gaussian mixture-based HMM (GHMM), the Dirichlet-based HMM (DHMM), and the inverted Dirichlet-based HMM (IDHMM), both without feature selection and when feature selection is embedded in all of the aforementioned models.
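The feature-saliency construction referenced in the abstract can be sketched generically: each feature d is relevant to the hidden state with saliency probability ρ_d, so the per-state emission density is a feature-wise mixture of a state-dependent "relevant" density and a shared state-independent "irrelevant" density. The sketch below is a minimal illustration, not the authors' GID-FSHMM: the exponential densities, function names, and all parameter values are illustrative assumptions, whereas the paper uses generalized inverted Dirichlet mixtures as the relevant densities and estimates everything via EM/MAP.

```python
import math

def log_sum_exp(vals):
    """Numerically stable log(sum(exp(v) for v in vals))."""
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def saliency_log_emission(x, rho, log_rel, log_irr):
    """Feature-saliency emission log-density for one state:
    p(x | state) = prod_d [ rho_d * p(x_d | state) + (1 - rho_d) * q(x_d) ],
    where log_rel(d, x_d) is the state-dependent (relevant) log-density and
    log_irr(d, x_d) is the shared state-independent (irrelevant) log-density."""
    ll = 0.0
    for d, xd in enumerate(x):
        if rho[d] >= 1.0:          # feature fully relevant
            ll += log_rel(d, xd)
        elif rho[d] <= 0.0:        # feature fully irrelevant
            ll += log_irr(d, xd)
        else:
            ll += log_sum_exp([math.log(rho[d]) + log_rel(d, xd),
                               math.log1p(-rho[d]) + log_irr(d, xd)])
    return ll

def forward_loglik(obs, pi, A, emis_ll):
    """HMM log-likelihood via the forward recursion in log space.
    pi: initial state probabilities; A: transition matrix;
    emis_ll(k, x): emission log-density of observation x in state k."""
    K = len(pi)
    alpha = [math.log(pi[k]) + emis_ll(k, obs[0]) for k in range(K)]
    for x in obs[1:]:
        alpha = [log_sum_exp([alpha[j] + math.log(A[j][k]) for j in range(K)])
                 + emis_ll(k, x) for k in range(K)]
    return log_sum_exp(alpha)
```

Setting ρ_d = 1 for every feature recovers an ordinary HMM emission, while ρ_d near 0 effectively drops feature d from the state-discrimination problem; in the paper the saliencies themselves are estimated within the EM iterations rather than fixed.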
Pages: 2365–2381 (16 pages)
References (134 total; first 10 shown)
[1] Adams S, Beling PA (2019) A survey of feature selection methods for Gaussian mixture models and hidden Markov models. Artificial Intelligence Review 52:1739–1779
[2] Adams S, Beling PA, Cogill R (2016) Feature selection for hidden Markov models and hidden semi-Markov models. IEEE Access 4:1642–1657
[3] Al Mashrgy M, Bdiri T, Bouguila N (2014) Simultaneous positive data clustering and unsupervised feature selection using generalized inverted Dirichlet mixture models. Knowledge-Based Systems 59:182–195
[4] Ambadar Z, Schooler JW, Cohn JF (2005) Deciphering the enigmatic face: the importance of facial dynamics in interpreting subtle facial expressions. Psychological Science 16:403–410
[5] Bashir FI, Khokhar AA, Schonfeld D (2007) Object trajectory-based activity classification and recognition using hidden Markov models. IEEE Transactions on Image Processing 16:1912–1919
[6] Baum LE, Petrie T (1966) Statistical inference for probabilistic functions of finite state Markov chains. Annals of Mathematical Statistics 37:1554–1563
[7] Bilmes JA (1998) A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models. International Computer Science Institute 4:126
[8] Blum AL, Langley P (1997) Selection of relevant features and examples in machine learning. Artificial Intelligence 97:245–271
[9] Dewan MAA, Murshed M, Lin F (2019) Engagement detection in online learning: a review. Smart Learning Environments 6:1
[10] Edwards GJ, Lanitis A, Taylor CJ, Cootes TF (1998) Statistical models of face images: improving specificity. Image and Vision Computing 16:203–211