Meta-evaluation for 3D Face Reconstruction Via Synthetic Data

Cited: 0
Authors
Sariyanidi, Evangelos [1 ]
Ferrari, Claudio [2 ]
Berretti, Stefano [3 ]
Schultz, Robert T. [1 ,4 ]
Tunc, Birkan [1 ,4 ]
Affiliations
[1] Children's Hospital of Philadelphia, Philadelphia, PA 19104, USA
[2] University of Parma, Parma, Italy
[3] University of Florence, Florence, Italy
[4] University of Pennsylvania, Philadelphia, PA, USA
Source
2023 IEEE INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS, IJCB | 2023
Funding
U.S. National Institutes of Health;
Keywords
SINGLE IMAGE;
DOI
10.1109/IJCB57857.2023.10448898
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The standard benchmark metric for 3D face reconstruction is the geometric error between reconstructed meshes and the ground truth. Nearly all recent reconstruction methods are validated on real ground-truth scans, in which case one needs to establish point correspondence prior to error computation, which is typically done with the Chamfer (i.e., nearest-neighbor) criterion. However, a simple yet fundamental question has not been asked: Is the Chamfer error an appropriate and fair benchmark metric for 3D face reconstruction? More generally, how can we determine which error estimator is a better benchmark metric? We present a meta-evaluation framework that uses synthetic data to evaluate the quality of a geometric error estimator as a benchmark metric for face reconstruction. Further, we use this framework to experimentally compare four geometric error estimators. Results show that the standard approach not only severely underestimates the error, but also does so inconsistently across reconstruction methods, to the point of even altering the ranking of the compared methods. Moreover, although non-rigid ICP leads to a metric with smaller estimation bias, it still could not correctly rank all compared reconstruction methods and is significantly more time-consuming than the Chamfer criterion. In sum, we expose several issues in current benchmarking and propose a procedure that uses synthetic data to address them.
Pages: 10
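
Illustrative sketch (not from the paper): the Chamfer criterion mentioned in the abstract establishes correspondence by matching each reconstructed vertex to its nearest neighbor on the ground-truth scan and then averaging the resulting distances. The Python snippet below is a minimal sketch of that idea, assuming NumPy and SciPy are available; the function name chamfer_error, the query direction (reconstruction to ground truth), and the plain mean are illustrative choices, not the paper's benchmark implementation.

# Minimal sketch of the Chamfer (nearest-neighbor) geometric error.
# Not the paper's benchmark code; names and averaging are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_error(recon_vertices: np.ndarray, gt_vertices: np.ndarray) -> float:
    """Average distance from each reconstructed vertex to its nearest
    ground-truth vertex, i.e., the nearest-neighbor correspondence criterion."""
    tree = cKDTree(gt_vertices)            # spatial index over the ground-truth scan
    dists, _ = tree.query(recon_vertices)  # nearest-neighbor distance per vertex
    return float(dists.mean())

if __name__ == "__main__":
    # Stand-in random point clouds; a real benchmark would load a reconstructed
    # mesh and a ground-truth scan instead.
    rng = np.random.default_rng(0)
    recon = rng.normal(size=(5000, 3))
    gt = rng.normal(size=(20000, 3))
    print(f"Chamfer error: {chamfer_error(recon, gt):.4f}")

The abstract's point is that this nearest-neighbor matching can severely underestimate the true geometric error, since a reconstructed vertex may be paired with a nearby but semantically wrong point on the scan, and the degree of underestimation varies across reconstruction methods.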