Facial expression recognition via weighted group sparsity

Times cited: 1
Authors
Zheng, Hao [1 ,2 ]
Geng, Xin [2 ]
Affiliations
[1] Nanjing XiaoZhuang Univ, Sch Informat Engn, Key Lab Trusted Cloud Comp & Big Data Anal, Nanjing 211171, Jiangsu, Peoples R China
[2] Southeast Univ, Sch Comp Sci & Engn, Nanjing 211189, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
facial expression recognition; multi-task learning; group sparsity; CLASSIFICATION; DIMENSIONALITY; SEQUENCES;
DOI
10.1007/s11704-016-5204-4
CLC number
TP [automation technology; computer technology];
Discipline code
0812 ;
Abstract
Considering the distinctiveness of different group features in sparse representation, a novel joint multi-task and weighted group sparsity (JMT-WGS) method is proposed. By weighting the popular group-sparsity penalty, the representation coefficients from the same class over their associated dictionaries are encouraged to share similarity, while the coefficients from different classes retain sufficient diversity. The proposed method is cast into a multi-task framework with a two-stage iteration. In the first stage, the representation coefficients are optimized by an accelerated proximal gradient method with the weights fixed; in the second stage, the weights are updated using prior information about their entropy. Experimental results on three facial expression databases show that the proposed algorithm outperforms several state-of-the-art methods.
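The first-stage optimization described in the abstract — minimizing a weighted group-sparse objective by an accelerated proximal gradient method — can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the least-squares data term, the FISTA-style acceleration, and all names (`fista_weighted_group`, `prox_weighted_group`) are hypothetical, and the fixed per-group weights merely stand in for the entropy-derived weights computed in the second stage.

```python
import numpy as np

def prox_weighted_group(x, groups, weights, step):
    """Proximal operator of the weighted group-lasso penalty:
    each group subvector x_g is shrunk toward zero by step * weights[g]."""
    out = x.copy()
    for g, idx in enumerate(groups):
        norm = np.linalg.norm(x[idx])
        scale = max(0.0, 1.0 - step * weights[g] / norm) if norm > 0 else 0.0
        out[idx] = scale * x[idx]
    return out

def fista_weighted_group(A, b, groups, weights, n_iter=200):
    """Accelerated proximal gradient (FISTA-style) for
    0.5 * ||A x - b||^2 + sum_g weights[g] * ||x_g||_2,
    with the group weights held fixed (first-stage update)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # gradient of the data term at y
        x_new = prox_weighted_group(y - grad / L, groups, weights, 1.0 / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x
```

A heavily weighted group is driven exactly to zero, which is how the weighting enforces diversity between classes: for example, with `groups = [[0, 1], [2, 3]]` and `weights = [0.1, 10.0]`, the second group of coefficients vanishes while the first survives with mild shrinkage.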
Pages: 266-275
Page count: 10