Modularized Predictive Coding-Based Online Motion Synthesis Combining Environmental Constraints and Motion-Capture Data

Cited by: 1
Authors:
Hwang, Jaepyung [1 ]
Ishii, Shin [1 ,2 ,3 ]
Kwon, Taesoo [4 ]
Oba, Shigeyuki [1 ]
Affiliations:
[1] Kyoto Univ, Grad Sch Informat, Dept Syst Sci, Kyoto 6068501, Japan
[2] Univ Tokyo, Inst Adv Study, Int Res Ctr Neurointelligence WPI IRCN, Tokyo 1130033, Japan
[3] Adv Telecommun Res Inst Int ATR, Seika 6190288, Japan
[4] Hanyang Univ, Dept Comp Software, Seoul 04763, South Korea
Keywords:
Combination of linear models; hybrid-based character animation; neuroscience-inspired; online motion synthesis; animation
DOI: 10.1109/ACCESS.2020.3036449
Chinese Library Classification (CLC): TP [automation technology; computer technology]
Subject classification code: 0812
Abstract:
Motion synthesis benefits from the combined use of motion-capture data and a dynamic model: the motion data provide a reference for natural movement, and the dynamic model supports environmental constraints such as footskate prevention or perturbation response. However, combining a dynamic model with captured motion usually demands professional insight, experience, and additional effort such as preprocessing or offline optimization. To address this issue, we propose a modularized predictive coding-based motion synthesis framework that synthesizes natural motion while maintaining the constraints. Modularized predictive coding provides intuitive online mediation of multiple information sources, which can then be applied to motion synthesis. To validate the proposed framework, we applied different types of motion data and character models to synthesize human walking, kickboxing, and backflipping motions, a dog walking motion, and a hand object-grasping motion.
Pages: 202274-202285
Number of pages: 12
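
As a rough illustration of the abstract's central idea of online mediation of multiple information sources, the following Python sketch fuses a data-driven pose prediction with a constraint-model prediction by precision weighting. It is a minimal sketch under assumed toy values, not the authors' framework; the function name fuse_predictions, the fixed precisions, and the 3-DoF pose values are all hypothetical.

import numpy as np

# Minimal illustrative sketch (not the paper's implementation): precision-weighted
# fusion of two pose predictions, in the spirit of predictive-coding-style mediation
# of multiple information sources. Names, values, and fixed precisions are assumed.

def fuse_predictions(pred_data, prec_data, pred_model, prec_model):
    """Combine a data-driven pose prediction with a constraint-model prediction
    by precision (inverse-variance) weighting, per degree of freedom."""
    total = prec_data + prec_model
    return (prec_data * pred_data + prec_model * pred_model) / total

# Toy 3-DoF pose fragment predicted by each "module".
pose_from_mocap = np.array([0.10, -0.25, 0.40])       # data-driven prediction
pose_from_constraint = np.array([0.12, -0.20, 0.35])  # e.g., footskate-aware model
precision_mocap = np.array([4.0, 4.0, 4.0])
precision_constraint = np.array([1.0, 8.0, 2.0])

print(fuse_predictions(pose_from_mocap, precision_mocap,
                       pose_from_constraint, precision_constraint))

Degrees of freedom where the constraint model is more confident (higher precision) pull the fused pose toward its prediction; elsewhere the motion-capture prior dominates, which is one simple way to read "mediation" of data-driven naturalness against environmental constraints.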