UNSUPERVISED CLASSIFICATION OF EXTREME FACIAL EVENTS USING ACTIVE APPEARANCE MODELS TRACKING FOR SIGN LANGUAGE VIDEOS

Times Cited: 0
Authors
Antonakos, Epameinondas [1 ]
Pitsikalis, Vassilis [1 ]
Rodomagoulakis, Isidoros [1 ]
Maragos, Petros [1 ]
Affiliations
[1] Natl Tech Univ Athens, Sch ECE, GR-10682 Athens, Greece
Source
2012 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP 2012) | 2012
Keywords
Sign language videos; Active Appearance Models; face tracking/modeling; head pose; unsupervised classification;
DOI
Not available
Chinese Library Classification (CLC)
TB8 [Photographic Technology];
Subject Classification Code
0804;
Abstract
We propose an Unsupervised method for Extreme States Classification (UnESC) on feature spaces of facial cues of interest. The method is built upon Active Appearance Models (AAM) face tracking and on feature extraction from global and local AAMs. UnESC is applied primarily to facial pose, but is shown to extend to local models of the eyes and mouth. Given the importance of facial events in sign languages, we apply UnESC to videos from two sign language corpora, American (ASL) and Greek (GSL), yielding promising qualitative and quantitative results. Beyond detecting extreme facial states, the proposed UnESC is also valuable for SL corpora that lack any facial annotations.
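The abstract only sketches the idea at a high level; as a purely illustrative toy (not the authors' actual UnESC algorithm), unsupervised labeling of "extreme" versus "neutral" frames from a single AAM-derived pose feature could look like a 1-D two-means split on the pose magnitude. All names, the yaw trajectory, and the clustering choice here are assumptions for illustration only:

```python
# Illustrative sketch, NOT the method from the paper: label each video frame
# as "extreme" (1) or "neutral" (0) from a 1-D head-pose feature (e.g., a yaw
# angle produced by AAM face tracking), using simple 1-D 2-means clustering
# on the absolute angle. No manual annotations are required.

def classify_extreme_states(pose_angles, iters=20):
    """Return a 0/1 label per frame: 1 = extreme pose, 0 = neutral."""
    mags = [abs(a) for a in pose_angles]
    c_neutral, c_extreme = min(mags), max(mags)  # init centers at range ends
    labels = [0] * len(mags)
    for _ in range(iters):
        # Assign each frame to the nearer cluster center.
        labels = [1 if abs(m - c_extreme) < abs(m - c_neutral) else 0
                  for m in mags]
        neutral = [m for m, lab in zip(mags, labels) if lab == 0]
        extreme = [m for m, lab in zip(mags, labels) if lab == 1]
        # Recompute centers as cluster means (keep old center if empty).
        if neutral:
            c_neutral = sum(neutral) / len(neutral)
        if extreme:
            c_extreme = sum(extreme) / len(extreme)
    return labels

# Hypothetical yaw trajectory (degrees): mostly near-frontal, two head turns.
yaw = [1, -2, 0, 3, 28, 31, 25, 2, -1, -30, -27, 1, 0]
print(classify_extreme_states(yaw))
# -> [0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0]
```

In this toy, the two head turns (around ±30 degrees of yaw) are picked out as extreme states without any labeled training data, which mirrors the unsupervised spirit described in the abstract; the real method operates on richer global/local AAM feature spaces.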
Pages: 1409-1412
Page count: 4