Affect Recognition in Hand-Object Interaction Using Object-Sensed Tactile and Kinematic Data

Cited by: 6
Authors
Niewiadomski, Radoslaw [1 ,2 ]
Beyan, Cigdem [3 ]
Sciutti, Alessandra [4 ]
Affiliations
[1] University of Trento, Department of Psychology and Cognitive Science, I-38068 Rovereto, Italy
[2] Istituto Italiano di Tecnologia, COgNiTive Architecture for Collaborative Technologies (CONTACT) Unit, I-16163 Genoa, Italy
[3] University of Trento, Department of Information Engineering and Computer Science, I-38122 Trento, Italy
[4] Istituto Italiano di Tecnologia, CONTACT Unit, I-16163 Genoa, Italy
Funding
European Research Council;
Keywords
Task analysis; Grasping; Kinematics; Sensors; Human-robot interaction; Shape; Feature extraction; Affective touch; emotion classification; hand-object interaction; vitality forms; tactile data; EMOTIONS; TOUCH;
DOI
10.1109/TOH.2022.3230643
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
We investigate the recognition of the affective states of a person performing an action with an object, by processing the object-sensed data. We focus on sequences of basic actions, such as grasping and rotating, which are constituents of daily-life interactions. iCube, a 5 cm cube, was used to collect tactile and kinematic data consisting of tactile maps (without information on the pressure applied to the surface) and rotations. We conduct two studies: classification of (i) emotions and (ii) vitality forms. In both, the participants perform a semi-structured task composed of basic actions. For emotion recognition, 237 trials by 11 participants, associated with anger, sadness, excitement, and gratitude, were used to train models on 10 hand-crafted features. The classifier accuracy reaches up to 82.7%. Interestingly, the same classifier, when trained exclusively on the tactile data, performs on par with its counterpart trained on all 10 features. In the second study, 1135 trials by 10 participants were used to classify two vitality forms. The best-performing model differentiated gentle actions from rude ones with an accuracy of 84.85%. The results also confirm that people touch objects differently when performing these basic actions with different affective states and attitudes.
Pages: 112-117
Page count: 6
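
To make the kind of pipeline the abstract describes concrete, below is a minimal illustrative sketch, assuming binary per-frame contact maps (consistent with the pressure-free tactile maps mentioned above) and a cumulative rotation-angle trace per trial. The six features, the 16-cell tactile layout, the SVM classifier, and the synthetic trials are hypothetical stand-ins, not the authors' actual 10 features, sensor geometry, or model.

# Illustrative sketch only: hand-crafted features from object-sensed tactile
# and rotation streams, fed to an SVM classifier (assumed, not the paper's).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(tactile, angle):
    """tactile: (T, n_cells) binary contact map per frame (no pressure info);
    angle: (T,) cumulative rotation angle. Feature names are hypothetical."""
    contact_per_frame = tactile.sum(axis=1)
    speed = np.abs(np.diff(angle))              # frame-to-frame angular change
    return np.array([
        contact_per_frame.mean(),               # average contact area
        contact_per_frame.max(),                # peak contact area
        tactile.any(axis=0).sum(),              # distinct cells ever touched
        (contact_per_frame > 0).mean(),         # fraction of frames in contact
        speed.mean(),                           # mean rotation speed
        speed.max(),                            # peak rotation speed
    ])

# Synthetic stand-in trials: "gentle" ones touch fewer cells and rotate slowly.
rng = np.random.default_rng(0)
X, y = [], []
for label, p_touch, rot_sd in (("gentle", 0.2, 1.0), ("rude", 0.5, 3.0)):
    for _ in range(60):
        T = int(rng.integers(40, 80))           # trial length in frames
        tactile = rng.random((T, 16)) < p_touch  # 16 hypothetical tactile cells
        angle = np.cumsum(rng.normal(0.0, rot_sd, T))
        X.append(extract_features(tactile, angle))
        y.append(label)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("5-fold CV accuracy:", cross_val_score(clf, np.array(X), y, cv=5).mean())

On such synthetic data the script prints a cross-validated accuracy; with real object-sensed recordings, the same skeleton would take per-trial tactile and rotation streams in place of the generated ones.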