Physics-based keyframe selection for human motion summarization

Cited by: 15
Authors
Voulodimos, Athanasios [1 ]
Rallis, Ioannis [2 ]
Doulamis, Nikolaos [2 ]
Institutions
[1] Univ West Attica, Dept Informat & Comp Engn, Agiou Spyridonos Str, Athens 12243, Greece
[2] Natl Tech Univ Athens, 9 Heroon Polytech Str, GR-15773 Athens, Greece
Funding
European Union Horizon 2020;
Keywords
Motion capture data; Motion summarization; Kinematics; 3D; Keyframe selection; Dance analysis; CAPTURE DATA; RETRIEVAL; EXTRACTION;
DOI
10.1007/s11042-018-6935-z
Chinese Library Classification
TP [automation technology, computer technology];
Subject classification code
0812;
Abstract
Analysis of human motion is a field of research that attracts significant interest because of the wide range of associated application domains. Intangible Cultural Heritage (ICH), including the performing arts and in particular dance, is one of the domains where related research is especially useful and challenging. Effective keyframe selection from motion sequences can provide an abstract and compact representation of the semantic information encoded therein, contributing towards useful functionality, such as fast browsing, matching and indexing of ICH content. The availability of powerful 3D motion capture sensors, along with the fact that video summarization techniques are not always applicable to the particular case of dance movement, creates the need for effective and efficient summarization techniques for keyframe selection from 3D human motion capture data sequences. In this paper, we introduce two techniques: a "time-independent" method based on the k-means++ clustering algorithm for the extraction of prominent representative instances of a dance, and a physics-based technique that creates temporal summaries of the sequence at different levels of detail. The proposed methods are evaluated on two dance motion datasets and show promising results.
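
The abstract describes a "time-independent" technique built on k-means++ clustering of motion capture frames. The Python sketch below is an illustration only, not the authors' implementation: the paper's exact pose features, distance metric, and cluster-count selection are not given in this record. It assumes frames are supplied as 3D joint positions, flattens each pose into a feature vector, clusters with scikit-learn's k-means++ initialization, and returns, for each cluster, the index of the frame nearest the centroid as a keyframe; the helper name select_keyframes and the array layout are assumptions.

    # Minimal sketch (not the paper's implementation): keyframe selection by
    # clustering motion-capture frames with k-means++ and taking, per cluster,
    # the frame nearest its centroid as a representative keyframe.
    import numpy as np
    from sklearn.cluster import KMeans

    def select_keyframes(frames: np.ndarray, n_keyframes: int = 8) -> list:
        """frames: (T, J, 3) array of 3D joint positions over T frames.
        Returns sorted indices of the selected keyframes (hypothetical helper)."""
        T = frames.shape[0]
        X = frames.reshape(T, -1)  # flatten each pose into one feature vector
        km = KMeans(n_clusters=n_keyframes, init="k-means++", n_init=10, random_state=0)
        labels = km.fit_predict(X)
        keyframes = []
        for c in range(n_keyframes):
            members = np.where(labels == c)[0]
            # pick the member frame closest to the cluster centroid
            dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
            keyframes.append(int(members[np.argmin(dists)]))
        return sorted(keyframes)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        mocap = rng.standard_normal((300, 25, 3))  # synthetic: 300 frames, 25 joints
        print(select_keyframes(mocap, n_keyframes=6))

Returning the frame closest to each centroid, rather than the centroid itself, keeps every keyframe an actual captured pose, which is what makes such a summary usable for browsing and indexing.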
Pages: 3243-3259
Number of pages: 17
Related papers
50 records in total
  • [1] Physics-based keyframe selection for human motion summarization
    Athanasios Voulodimos
    Ioannis Rallis
    Nikolaos Doulamis
    Multimedia Tools and Applications, 2020, 79 : 3243 - 3259
  • [2] Keyframe Selection from Motion Capture Sequences with Graph based Deep Reinforcement Learning
    Mo, Clinton
    Hu, Kun
    Mei, Shaohui
    Chen, Zebin
    Wang, Zhiyong
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 5194 - 5202
  • [3] Optimal and interactive keyframe selection for motion capture
    Roberts, Richard
    Lewis, J. P.
    Anjyo, Ken
    Seo, Jaewoo
    Seol, Yeongho
    COMPUTATIONAL VISUAL MEDIA, 2019, 5 (02) : 171 - 191
  • [4] Keypoint-Based Keyframe Selection
    Guan, Genliang
    Wang, Zhiyong
    Lu, Shiyang
    Da Deng, Jeremiah
    Feng, David Dagan
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2013, 23 (04) : 729 - 734
  • [5] MaskedMimic: Unified Physics-Based Character Control Through Masked Motion Inpainting
    Tessler, Chen
    Guo, Yunrong
    Nabati, Ofir
    Chechik, Gal
    Peng, Xue Bin
    ACM TRANSACTIONS ON GRAPHICS, 2024, 43 (06):
  • [6] Keyframe Extraction for Human Motion Capture Data Based on Joint Kernel Sparse Representation
    Xia, Guiyu
    Sun, Huaijiang
    Niu, Xiaoqing
    Zhang, Guoqing
    Feng, Lei
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2017, 64 (02) : 1589 - 1599
  • [7] Keyframe Extraction from Human Motion Capture Data Based on a Multiple Population Genetic Algorithm
    Zhang, Qiang
    Zhang, Shulu
    Zhou, Dongsheng
    SYMMETRY-BASEL, 2014, 6 (04) : 926 - 937
  • [8] A Physics-Based Longitudinal Driver Model for Automated Vehicles
    Rahman, Mizanur
    Islam, Md Rafiul
    Chowdhury, Mashrur
    Khan, Taufiquar
    IEEE ACCESS, 2022, 10 : 80883 - 80899
  • [9] A Physics-Based Compact Model of SiC Power MOSFETs
    Kraus, Rainer
    Castellazzi, Alberto
    IEEE TRANSACTIONS ON POWER ELECTRONICS, 2016, 31 (08) : 5863 - 5870
  • [10] Temporal segmentation and keyframe selection methods for user-generated video search-based annotation
    Gonzalez-Diaz, Ivan
    Martinez-Cortes, Tomas
    Gallardo-Antolin, Ascension
    Diaz-de-Maria, Fernando
    EXPERT SYSTEMS WITH APPLICATIONS, 2015, 42 (01) : 488 - 502