Emotional Intelligence for the Decision-Making Process of Trajectories in Collaborative Robotics

Cited by: 14
Authors
Antonelli, Michele Gabrio [1 ]
Beomonte Zobel, Pierluigi [1 ]
Manes, Costanzo [2 ]
Mattei, Enrico [2 ]
Stampone, Nicola [1 ]
Affiliations
[1] Univ Aquila, Dipartimento Ingn Ind & Informaz & Econ, Ple Pontieri Monteluco Roio, I-67100 Laquila, Italy
[2] Univ Aquila, Dipartimento Ingn & Sci Informaz & Matemat DISIM, Via Vetoio, I-67100 Laquila, Italy
Keywords
emotional intelligence; vision transformer; collaborative robotics; digital twin; human-robot interaction; RECOGNITION; UNPLEASANT
DOI
10.3390/machines12020113
CLC classification
TM [Electrical engineering]; TN [Electronic and communication technology];
Discipline classification codes
0808 ; 0809 ;
Abstract
In collaborative robotics, improving human-robot interaction (HRI) requires avoiding accidental impacts. To this end, several works have reported how to modify the trajectories of collaborative robots (cobots) by monitoring the operator's position in the cobot workspace with industrial safety devices, cameras, or wearable tracking devices. Detecting the operator's emotional state could further prevent potentially dangerous situations. This work aimed to increase the predictability of anomalous behavior by human operators through the implementation of emotional intelligence (EI), which allows a cobot to detect the operator's Level of Attention (LoA), implicitly associated with the emotional state, and to decide on the safest trajectory for completing a task. As a result, the operator is induced to pay due attention, the safety of the HRI is improved, and cobot downtime is reduced. The approach was based on a vision transformer (ViT) architecture trained and validated on the Level of Attention Dataset (LoAD), an ad hoc dataset created from facial expressions and hand gestures. The ViT was integrated into a digital twin of the Omron TM5-700 cobot, developed within this project, and the effectiveness of the EI was tested on a pick-and-place task. The proposed approach was then experimentally validated on the physical cobot. The simulation and experimental results showed that the goal of the work was achieved and that the decision-making process can be successfully integrated into existing robot control strategies.
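The abstract describes a pipeline in which a ViT classifies the operator's Level of Attention and the cobot then selects the safest trajectory. The following is a minimal sketch of such a decision layer, not the authors' code: the class names, confidence threshold, and trajectory labels are illustrative assumptions, and the ViT itself is abstracted away as a vector of class logits.

```python
import math

# Assumed LoA classes; the paper's LoAD dataset may use different labels.
LOA_CLASSES = ("high_attention", "medium_attention", "low_attention")

def softmax(logits):
    """Convert raw classifier logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def select_trajectory(loa_logits, confidence_floor=0.6):
    """Map ViT logits for the operator's LoA to a trajectory choice.

    A low-attention prediction, or any low-confidence prediction, is
    treated conservatively by choosing the most protective trajectory.
    """
    probs = softmax(loa_logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    label = LOA_CLASSES[idx]
    if probs[idx] < confidence_floor or label == "low_attention":
        return "safe_detour"    # longer path, keeps distance from the operator
    if label == "medium_attention":
        return "reduced_speed"  # nominal path at lower velocity
    return "nominal"            # direct pick-and-place path

print(select_trajectory([2.5, 0.3, -1.0]))  # confident high attention -> nominal
```

In a real deployment the confidence floor and the trajectory set would be tuned against the cobot's safety requirements; the point of the sketch is only that the classifier output feeds a deterministic, conservative-by-default decision rule.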
Pages: 21
References
54 items
[1]   Future of industry 5.0 in society: human-centric solutions, challenges and prospective research areas [J].
Adel, Amr .
JOURNAL OF CLOUD COMPUTING-ADVANCES SYSTEMS AND APPLICATIONS, 2022, 11 (01)
[2]  
Alexander K, 2024, Arxiv, DOI arXiv:2206.08219
[3]  
[Anonymous], 2016, Robots and robotic devices
[4]  
[Anonymous], omron industrial automation
[5]   Soft Pneumatic Helical Actuator for Collaborative Robotics [J].
Antonelli, Michele Gabrio ;
D'Ambrogio, Walter .
ADVANCES IN ITALIAN MECHANISM SCIENCE, IFTOMM ITALY 2022, 2022, 122 :702-709
[6]   Design Methodology for a Novel Bending Pneumatic Soft Actuator for Kinematically Mirroring the Shape of Objects [J].
Antonelli, Michele Gabrio ;
Zobel, Pierluigi Beomonte ;
D'Ambrogio, Walter ;
Durante, Francesco .
ACTUATORS, 2020, 9 (04) :1-20
[7]   Face-Based Attention Recognition Model for Children with Autism Spectrum Disorder [J].
Banire, Bilikis ;
Al Thani, Dena ;
Qaraqe, Marwa ;
Mansoor, Bilal .
JOURNAL OF HEALTHCARE INFORMATICS RESEARCH, 2021, 5 (04) :420-445
[8]   Attentional resources measured by reaction times highlight differences within pleasant and unpleasant, high arousing stimuli [J].
Buodo, G ;
Sarlo, M ;
Palomba, D .
MOTIVATION AND EMOTION, 2002, 26 (02) :123-138
[9]   ViTFER: Facial Emotion Recognition with Vision Transformers [J].
Chaudhari, Aayushi ;
Bhatt, Chintan ;
Krishna, Achyut ;
Mazzeo, Pier Luigi .
APPLIED SYSTEM INNOVATION, 2022, 5 (04)
[10]   Self-supervised vision transformer-based few-shot learning for facial expression recognition [J].
Chen, Xuanchi ;
Zheng, Xiangwei ;
Sun, Kai ;
Liu, Weilong ;
Zhang, Yuang .
INFORMATION SCIENCES, 2023, 634 :206-226