Orthogonal Deep Features Decomposition for Age-Invariant Face Recognition

Cited by: 80
Authors
Wang, Yitong [1 ]
Gong, Dihong [1 ]
Zhou, Zheng [1 ]
Ji, Xing [1 ]
Wang, Hao [1 ]
Li, Zhifeng [1 ]
Liu, Wei [1 ]
Zhang, Tong [1 ]
Affiliations
[1] Tencent AI Lab, Beijing, Peoples R China
Source
COMPUTER VISION - ECCV 2018, PT 15 | 2018, Vol. 11219
Keywords
Age-invariant face recognition; Convolutional neural networks; Cross-age face dataset; PATTERNS;
DOI
10.1007/978-3-030-01267-0_45
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As facial appearance is subject to significant intra-class variations caused by the aging process over time, age-invariant face recognition (AIFR) remains a major challenge in the face recognition community. To reduce the intra-class discrepancy caused by aging, in this paper we propose a novel approach (namely, Orthogonal Embedding CNNs, or OE-CNNs) to learn age-invariant deep face features. Specifically, we decompose deep face features into two orthogonal components that represent age-related and identity-related information, respectively. The identity-related features, which are robust to aging, are then used for AIFR. In addition, to complement the existing cross-age datasets and advance research in this field, we construct a brand-new large-scale Cross-Age Face dataset (CAF). Extensive experiments conducted on three public-domain face aging datasets (MORPH Album 2, CACD-VS, and FG-NET) demonstrate the effectiveness of the proposed approach and the value of the constructed CAF dataset for AIFR. Benchmarking our algorithm on LFW, one of the most popular general face recognition (GFR) datasets, additionally demonstrates comparable generalization performance on GFR.
Pages: 764-779
Number of pages: 16
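
The abstract above describes splitting each deep face feature into two orthogonal components, one age-related and one identity-related, and matching faces with only the identity-related part. The record here does not spell out the exact parameterization, so the snippet below is only a minimal sketch of one way such a split could look, assuming the age-related component is the feature's radial magnitude and the identity-related component is its unit-norm direction; the names decompose_feature and cosine_identity_score are illustrative, not taken from the paper.

import numpy as np

def decompose_feature(x, eps=1e-12):
    # Split a deep feature vector into an identity-related direction and an
    # age-related magnitude (one possible realization of an orthogonal split).
    x = np.asarray(x, dtype=np.float64)
    x_age = np.linalg.norm(x)          # radial component: assumed age-related
    x_id = x / (x_age + eps)           # unit direction: assumed identity-related
    return x_id, x_age

def cosine_identity_score(a, b):
    # Compare two faces using only the identity-related directions,
    # discarding the age-related magnitudes.
    a_id, _ = decompose_feature(a)
    b_id, _ = decompose_feature(b)
    return float(np.dot(a_id, b_id))   # cosine similarity in [-1, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f_young = rng.normal(size=512)                        # a 512-D embedding (illustrative)
    f_old = 2.5 * f_young + 0.05 * rng.normal(size=512)   # same direction, larger norm
    print(cosine_identity_score(f_young, f_old))          # close to 1.0: same identity

Under this assumption, verification depends only on the feature direction, so variations that mainly change the feature magnitude (standing in here for aging effects) leave the match score essentially unchanged.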
Related Papers
50 records in total
  • [21] Age Invariant Face Recognition Using Minimal Geometrical Facial Features
    Bijarnia, Saroj
    Singh, Preety
    ADVANCED COMPUTING AND COMMUNICATION TECHNOLOGIES, 2016, 452 : 71 - 77
  • [22] Deep Component Based Age Invariant Face Recognition in an Unconstrained Environment
    Asif, Amad
    Tahir, Muhammad Atif
    Ali, Mohsin
    ADVANCES IN COMPUTATIONAL COLLECTIVE INTELLIGENCE (ICCCI 2021), 2021, 1463 : 101 - 113
  • [23] An approach to enhance performance of age invariant face recognition
    Dhamija, Ashutosh
    Dubey, R. B.
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2022, 43 (03) : 2347 - 2362
  • [24] Analysis of Age Invariant Face Recognition Efficiency Using Face Feature Vectors
    Hast, Anders
    Zhou, Yijie
    Lai, Congting
    Blohm, Ivar
    ROBOTICS, COMPUTER VISION AND INTELLIGENT SYSTEMS, ROBOVIS 2024, 2024, 2077 : 47 - 65
  • [25] A Maximum Entropy Feature Descriptor for Age Invariant Face Recognition
    Gong, Dihong
    Li, Zhifeng
    Tao, Dacheng
    Liu, Jianzhuang
    Li, Xuelong
    2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2015, : 5289 - 5297
  • [26] Implicit and Explicit Feature Purification for Age-Invariant Facial Representation Learning
    Xie, Jiu-Cheng
    Pun, Chi-Man
    Lam, Kin-Man
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17 : 399 - 412
  • [27] Biased face patching approach for age invariant face recognition using convolutional neural network
    Nimbarte, Mrudula
    Bhoyar, K. K.
    Inderscience Publishers, 19 : 103 - 124
  • [28] An approach to enhance age invariant face recognition performance based on gender classification
    Nayak, Jyothi S.
    Indiramma, M.
    JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES, 2022, 34 (08) : 5183 - 5191
  • [29] Modeling Self-Principal Component Analysis for Age Invariant Face Recognition
    Nayak, Jyothi S.
    Indiramma, M.
    Nagarathna, N.
    2012 IEEE INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND COMPUTING RESEARCH (ICCIC), 2012, : 402 - 406
  • [30] A composite spatio-temporal modeling approach for age invariant face recognition
    Alvi, Fahad Bashir
    Pears, Russel
    EXPERT SYSTEMS WITH APPLICATIONS, 2017, 72 : 383 - 394