EMOKINE: A software package and computational framework for scaling up the creation of highly controlled emotional full-body movement datasets

Cited: 0
Authors
Christensen, Julia F. [1 ]
Fernandez, Andres [2 ,3 ]
Smith, Rebecca A. [4 ]
Michalareas, Georgios [1 ]
Yazdi, Sina H. N. [5 ]
Farahi, Fahima [5 ]
Schmidt, Eva-Madeleine [1 ,6 ,11 ]
Bahmanian, Nasimeh [7 ,8 ]
Roig, Gemma [9 ,10 ]
Affiliations
[1] Max Planck Inst Empir Aesthet, Dept Cognit Neuropsychol, Frankfurt, Germany
[2] Univ Tubingen, Methods Machine Learning, Tubingen, Germany
[3] Int Max Planck Res Sch Intelligent Syst, Tubingen, Germany
[4] Univ Glasgow, Dept Psychol, Glasgow, Scotland
[5] WiseWorld AI, Porto, Portugal
[6] Max Planck Sch Cognit, Leipzig, Germany
[7] Max Planck Inst Empir Aesthet, Dept Language & Literature, Frankfurt, Germany
[8] Goethe Univ, Dept Modern Languages, Frankfurt, Germany
[9] Goethe Univ, Comp Sci Dept, Frankfurt, Germany
[10] Hessian Ctr Artificial Intelligence Hessian AI, Darmstadt, Germany
[11] Max Planck Inst Human Dev, Ctr Humans & Machines, Berlin, Germany
Funding
UK Economic and Social Research Council;
Keywords
Emotion; Motion capture; Computer vision; Affective neuroscience; Aesthetics; Dance; Dataset; Open science; POINT-LIGHT DISPLAYS; BIOLOGICAL-MOTION; BODILY EXPRESSION; EFFORT-SHAPE; RECOGNITION; PERCEPTION; DANCE; ABILITY; NEUROBIOLOGY; PERSONALITY;
DOI
10.3758/s13428-024-02433-0
Chinese Library Classification
B841 [Psychological research methods];
Subject classification code
040201 ;
Abstract
EMOKINE is a software package and dataset-creation suite for emotional full-body movement research in experimental psychology, affective neuroscience, and computer vision. A computational framework, comprehensive instructions, a pilot dataset, observer ratings, and kinematic feature extraction code are provided to facilitate future dataset creation at scale. In addition, the EMOKINE framework outlines how complex sequences of movements may advance emotion research. Such research has traditionally used emotional 'action'-based stimuli, such as hand-waving or walking motions. Here, instead, a pilot dataset is provided with short dance choreographies, repeated several times by a dancer who expressed a different emotional intention at each repetition: anger, contentment, fear, joy, neutrality, and sadness. The dataset was simultaneously filmed professionally and recorded using XSENS® motion capture technology (17 sensors, 240 frames/second). Thirty-two statistics from 12 kinematic features were extracted offline, for the first time in one single dataset: speed, acceleration, angular speed, angular acceleration, limb contraction, distance to center of mass, quantity of motion, dimensionless jerk (integral), head angle (with regard to the vertical axis and to the back), and space (convex hull 2D and 3D). Average, median absolute deviation (MAD), and maximum value were computed as applicable. The EMOKINE software is applicable to other motion-capture systems and is openly available on the Zenodo Repository. Releases on GitHub include: (i) the code to extract the 32 statistics, (ii) a rigging plugin for Python for MVNX file conversion to Blender format (MVNX = output file of the XSENS® system), and (iii) a Python-script-powered custom software to assist with blurring faces; the latter two are under GPLv3 licenses.
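The abstract names per-frame kinematic features (e.g., speed) and the summary statistics computed over them (average, median absolute deviation, maximum). As a rough illustration only, and not the actual EMOKINE code, the sketch below shows how such statistics could be derived from one joint's motion-capture trajectory sampled at the reported 240 frames/second; the function name and array layout are assumptions.

```python
import numpy as np

FPS = 240  # XSENS capture rate reported in the abstract


def speed_stats(positions):
    """Summary statistics of per-frame speed for one joint.

    positions: (n_frames, 3) array of xyz coordinates in meters.
    Returns (mean, median absolute deviation, maximum) of speed in m/s.
    """
    # Finite-difference velocity between consecutive frames, scaled to m/s.
    velocities = np.diff(positions, axis=0) * FPS
    # Scalar speed is the Euclidean norm of each velocity vector.
    speed = np.linalg.norm(velocities, axis=1)
    # Median absolute deviation: median distance from the median speed.
    mad = np.median(np.abs(speed - np.median(speed)))
    return speed.mean(), mad, speed.max()


# Example: a joint moving at a constant 1 m/s along the x-axis for 2 seconds.
pos = np.column_stack([np.arange(480) / FPS, np.zeros(480), np.zeros(480)])
mean_s, mad_s, max_s = speed_stats(pos)
```

The same pattern (per-frame feature, then mean/MAD/max over frames) would apply to the other frame-wise features listed in the abstract, such as acceleration or limb contraction.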
Pages: 7498-7542
Page count: 45