An Immersive Human-Robot Interactive Game Framework Based on Deep Learning for Children's Concentration Training

Cited by: 5
Authors
Liu, Li [1 ]
Liu, Yangguang [2 ]
Gao, Xiao-Zhi [3 ]
Zhang, Xiaomin [1 ]
Affiliations
[1] Ningbo Univ Finance & Econ, Coll Digital Technol & Engn, Ningbo 315175, Peoples R China
[2] Ningbo Univ Finance & Econ, Coll Finance & Informat, Ningbo 315175, Peoples R China
[3] Univ Eastern Finland, Sch Comp, Kuopio 70210, Finland
Keywords
concentration training; human-robot interaction; gesture recognition; deep learning; EEG
DOI
10.3390/healthcare10091779
Chinese Library Classification (CLC)
R19 [Health care organization and services (health services administration)]
Abstract
To alleviate bottlenecks such as the shortage of professional teachers, inattention during training, and the low effectiveness of concentration training, we propose an immersive human-robot interactive (HRI) game framework based on deep learning for children's concentration training and demonstrate its use through HRI games based on gesture recognition. The framework comprises four functional modules: video data acquisition, image recognition modeling, a deep learning algorithm (YOLOv5), and information feedback. First, we built a gesture recognition model from a dataset of 10,000 pictures of children's gestures using the YOLOv5 algorithm; the average recognition accuracy in training was 98.7%. Second, we recruited 120 children with attention deficits (aged 9 to 12 years; 60 girls and 60 boys) to play the HRI games. The experiment yielded 8640 samples, which were normalized and processed. The results showed that the girls had better visual short-term memory and shorter response times than the boys. Overall, the HRI games offered high efficacy, convenience, and full freedom, making them appropriate for children's concentration training.
Pages: 15
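
The abstract describes a four-module pipeline (video data acquisition, image recognition modeling, a YOLOv5 deep learning algorithm, and information feedback). The following Python sketch illustrates how such a loop could be wired together; it is not the authors' released code, and the weights filename gesture_best.pt, the camera index 0, and the 0.5 confidence threshold are assumptions made purely for illustration.

    # Minimal sketch (not the authors' released code) of the four-module loop
    # described in the abstract: video acquisition -> YOLOv5 gesture recognition
    # -> feedback. "gesture_best.pt", camera index 0, and the 0.5 confidence
    # threshold are illustrative assumptions.
    import cv2
    import torch

    # Image-recognition module: load custom-trained YOLOv5 gesture weights
    # through the public ultralytics/yolov5 torch.hub interface.
    model = torch.hub.load('ultralytics/yolov5', 'custom', path='gesture_best.pt')
    model.conf = 0.5  # detection confidence threshold (illustrative value)

    # Video-data-acquisition module: read frames from the default camera.
    cap = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Deep-learning module: YOLOv5 inference (OpenCV frames are BGR,
            # so flip the channel order to RGB before passing them in).
            results = model(frame[:, :, ::-1])
            detections = results.pandas().xyxy[0]  # one row per detected gesture
            # Information-feedback module: report the most confident gesture,
            # which the game logic would map to an in-game response.
            if not detections.empty:
                best = detections.sort_values('confidence', ascending=False).iloc[0]
                print(f"gesture={best['name']}  confidence={best['confidence']:.2f}")
            cv2.imshow('HRI game camera', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to stop
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()

In the published study, the per-trial results (8640 samples in total) were further normalized before analysis; that post-processing step is outside the scope of this sketch.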