High-Resolution Face Fusion for Gender Conversion

Times cited: 25
Authors
Suo, Jinli [1 ]
Lin, Liang [2 ]
Shan, Shiguang [3 ,4 ]
Chen, Xilin [3 ,4 ]
Gao, Wen [5 ]
Affiliations
[1] Chinese Acad Sci, Grad Univ, Beijing 100190, Peoples R China
[2] Sun Yat Sen Univ, Sch Software, Guangzhou 510275, Guangdong, Peoples R China
[3] Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100190, Peoples R China
[4] Chinese Acad Sci, Inst Comp Technol, ICT ISVISION Joint Res & Dev Lab Face Recognit, Beijing 100190, Peoples R China
[5] Peking Univ, Sch Elect Engn & Comp Sci, Beijing 100871, Peoples R China
Source
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, PART A: SYSTEMS AND HUMANS | 2011, Vol. 41, No. 2
Funding
National Natural Science Foundation of China
Keywords
And-Or graph; face fusion; gender conversion; object recognition; image fusion; classification; appearance; shape
DOI
10.1109/TSMCA.2010.2064304
Chinese Library Classification (CLC)
TP3 [Computing Technology; Computer Technology]
Discipline code
0812
Abstract
This paper presents an integrated face image fusion framework for gender conversion that combines a hierarchical compositional paradigm with seamless image-editing techniques. In our framework, a high-resolution face is represented by a probabilistic graphical model that decomposes a human face into several parts (facial components) constrained by explicit spatial configurations (relationships). Benefiting from this representation, the proposed fusion strategy largely preserves the identity of each facial component while applying the gender transformation. Given a face image, the basic idea is to select reference facial components from the opposite-gender group as templates and transform the appearance of the given image toward the selected components. Our fusion approach decomposes a face image into sketchable and nonsketchable regions. For the sketchable regions (e.g., the contours of facial components and wrinkle lines), we use a graph-matching algorithm to find the best templates and transform the structure (shape); for the nonsketchable regions (e.g., the texture areas of facial components and skin), we learn active appearance models and transform the texture attributes in the corresponding principal component analysis space. Both objective and subjective quantitative evaluations on 200 Asian frontal-face images selected from the public Lotus Hill Image database show that the proposed approach gives plausible gender conversion results.
Pages: 226-237 (12 pages)
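
The abstract's nonsketchable-region step (transforming texture attributes in a principal-component space learned from training faces) can be illustrated with a minimal Python sketch. This is not the paper's implementation: it uses plain PCA in place of a full active appearance model, assumes textures are already shape-normalized and flattened per facial component, and the helper names (fit_texture_pca, transform_texture) and the blend factor alpha are hypothetical.

import numpy as np
from sklearn.decomposition import PCA

def fit_texture_pca(train_textures, n_components=50):
    # Learn a PCA appearance space from aligned, flattened face textures
    # (one row per face), standing in for the paper's AAM texture model.
    pca = PCA(n_components=n_components)
    pca.fit(train_textures)
    return pca

def transform_texture(pca, source, template, alpha=0.6):
    # Project both textures into the PCA space and blend the coefficients
    # toward the opposite-gender template; keeping (1 - alpha) of the
    # source's coefficients is one way to retain identity while shifting
    # gender-related appearance.
    c_src = pca.transform(source.reshape(1, -1))
    c_tpl = pca.transform(template.reshape(1, -1))
    c_out = (1.0 - alpha) * c_src + alpha * c_tpl
    return pca.inverse_transform(c_out).ravel()

# Toy usage with random vectors standing in for 64x64 component textures.
rng = np.random.default_rng(0)
train = rng.normal(size=(200, 64 * 64))
pca = fit_texture_pca(train, n_components=50)
converted = transform_texture(pca, source=train[0], template=train[1])

In the paper's pipeline this texture blend would apply per facial component, with the sketchable regions (contours, wrinkle lines) handled separately by graph matching and shape transformation.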