Objective Evaluation Metric for Motion Generative Models: Validating Frechet Motion Distance on Foot Skating and Over-smoothing Artifacts

Cited by: 1
Authors
Maiorca, Antoine [1 ]
Bohy, Hugo [1 ]
Yoon, Youngwoo [2 ]
Dutoit, Thierry [1 ]
Affiliations
[1] Univ Mons, ISIA Lab, Mons, Hainaut, Belgium
[2] Elect & Telecommun Res Inst, Daejeon, South Korea
Source
15TH ANNUAL ACM SIGGRAPH CONFERENCE ON MOTION, INTERACTION AND GAMES, MIG 2023 | 2023
Keywords
Objective evaluation metric; Motion-generative model evaluation; Frechet motion distance
DOI
10.1145/3623264.3624443
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Nowadays, Deep Learning-powered generative models can generate synthetic samples that are nearly indistinguishable from natural data. The development of such systems necessarily involves the design of evaluation protocols to assess their performance. Quantitative objective metrics, such as the Frechet distance, alongside human-centered subjective surveys, have become a standard for evaluating generative algorithms. Although motion generation is a popular research field, only a few works have addressed the design and validation of a robust objective evaluation metric for motion-generative models. These previous works degraded ground-truth motion samples with synthetic noise (e.g., Gaussian, Salt & Pepper) and studied the behavior of the proposed metric. However, such degradation does not mimic the motion artifacts commonly produced by generative models. In this work, we propose (1) to validate Frechet distance-based objective metrics on motion datasets degraded by two realistic motion artifacts often found in motion synthesis results, foot skating and over-smoothing, and (2) a Frechet Motion Distance (FMD), built on a Transformer-based feature extractor, that captures these motion artifacts and is also robust to variations in motion length.
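Like other Fréchet-distance metrics (e.g., FID for images), an FMD-style score compares Gaussian fits to the feature distributions of real and generated samples. A minimal sketch of that standard computation is below; the Transformer feature extractor itself is assumed to be given, and the function and variable names are ours, not taken from the paper:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_gen):
    """Fréchet distance between Gaussians fitted to two feature sets.

    feats_real, feats_gen: arrays of shape (n_samples, feat_dim),
    e.g., embeddings produced by a pretrained motion feature extractor.
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_gen, rowvar=False)
    # Matrix square root of the covariance product; discard tiny
    # imaginary parts introduced by numerical error.
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

# Toy illustration with synthetic "features": a distribution shift
# increases the distance, identical sets score near zero.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, (500, 8))
b = rng.normal(0.5, 1.0, (500, 8))
print(frechet_distance(a, a))  # ≈ 0
print(frechet_distance(a, b))  # clearly positive
```

In an FMD setting, `a` and `b` would instead hold extractor embeddings of ground-truth and generated motion clips; the metric's sensitivity to artifacts such as foot skating then depends entirely on what the feature extractor encodes.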
Pages: 11