ArcFace: Additive Angular Margin Loss for Deep Face Recognition

Cited by: 169
Authors
Deng, Jiankang [1 ]
Guo, Jia [2 ]
Yang, Jing [3 ]
Xue, Niannan [1 ]
Kotsia, Irene [4 ]
Zafeiriou, Stefanos [1 ]
Affiliations
[1] Imperial Coll London, Dept Comp, London SW7 2BX, England
[2] InsightFace, London SW7 2AZ, England
[3] Univ Nottingham, Dept Comp Sci, Nottingham NG7 2RD, England
[4] Cogitat, London W10 5YU, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Large-scale face recognition; additive angular margin; noisy labels; sub-class; model inversion
DOI
10.1109/TPAMI.2021.3087709
CLC number
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
A popular recent line of research in face recognition adopts margins in the well-established softmax loss function to maximize class separability. In this paper, we first introduce an Additive Angular Margin Loss (ArcFace), which not only has a clear geometric interpretation but also significantly enhances the discriminative power. Since ArcFace is susceptible to massive label noise, we further propose sub-center ArcFace, in which each class contains K sub-centers and a training sample only needs to be close to any one of the K positive sub-centers. Sub-center ArcFace encourages one dominant sub-class that contains the majority of clean faces and non-dominant sub-classes that include hard or noisy faces. Based on this self-propelled isolation, we boost performance by automatically purifying raw web faces under massive real-world noise. Beyond discriminative feature embedding, we also explore the inverse problem: mapping feature vectors back to face images. Without training any additional generator or discriminator, the pre-trained ArcFace model can generate identity-preserved face images for subjects both inside and outside the training data, using only the network gradient and Batch Normalization (BN) priors. Extensive experiments demonstrate that ArcFace can enhance the discriminative feature embedding as well as strengthen the generative face synthesis.
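The following is a minimal PyTorch sketch of the two losses summarized in the abstract; the class and function names are hypothetical, and s=64, m=0.5 are the settings commonly reported for ArcFace. The head computes cosine logits between L2-normalized embeddings and class centers, adds the angular margin m only to the target-class angle, and applies softmax cross-entropy; the sub-center variant holds K centers per class and max-pools the K cosines, so a sample only has to match its nearest positive sub-center.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    """Sketch of an additive angular margin (ArcFace-style) head."""

    def __init__(self, embed_dim: int, num_classes: int,
                 s: float = 64.0, m: float = 0.5):
        super().__init__()
        # One learnable center per class; rows are normalized in forward().
        self.weight = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.s, self.m = s, m

    def forward(self, embeddings: torch.Tensor,
                labels: torch.Tensor) -> torch.Tensor:
        # cos(theta_j) for every class j via normalized features/centers.
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
        # Add the margin m only to the ground-truth angle: cos(theta_y + m).
        target = F.one_hot(labels, num_classes=cos.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cos)
        # Standard softmax cross-entropy on the re-scaled logits.
        return F.cross_entropy(self.s * logits, labels)

def subcenter_cosines(embeddings: torch.Tensor, weight: torch.Tensor,
                      k: int) -> torch.Tensor:
    """Sub-center variant: weight stores K centers per class, shape
    (num_classes * K, embed_dim), grouped contiguously by class; each
    sample only needs its nearest sub-center, so max-pool over K."""
    cos = F.linear(F.normalize(embeddings), F.normalize(weight))
    return cos.view(cos.size(0), -1, k).max(dim=2).values

Plugging subcenter_cosines into the head in place of the plain cosine computation reproduces the behavior the abstract describes: clean faces gravitate toward one dominant sub-center while hard or noisy faces collect in the non-dominant ones, which is what enables the automatic purification step.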
Pages: 5962-5979
Page count: 18