A Multimodal Human-Computer Interaction for Smart Learning System

Cited by: 10
Authors
Alzubi, Tareq Mahmod [1]
Alzubi, Jafar A. [2]
Singh, Ashish [3]
Alzubi, Omar A. [4]
Subramanian, Murali [1,4]
Affiliations
[1] Al Balqa Appl Univ, Prince Abdullah Bin Ghazi Fac Informat & Commun Technol, Amman, Jordan
[2] Al Balqa Appl Univ, Fac Engn, As Salt, Jordan
[3] KIIT Deemed Univ, Sch Comp Engn, Bhubaneswar, India
[4] Vellore Inst Technol, Sch Comp Sci & Engn, Vellore, India
Keywords
Human-Computer Interaction; multimodal HCI; multilayer CNN; deep learning model; Smart Learning System; DESIGN; RECOGNITION; MODEL;
DOI
10.1080/10447318.2023.2206758
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
The rise of digitalization and computing devices has transformed the educational landscape, making traditional teaching methods less productive. In this context, early and continuous user interaction is crucial for designing and developing effective learning applications. The field of Human-Computer Interaction (HCI) has seen significant technological growth, enabling educators to deliver quality educational services through smart input and output channels. However, a multimodal HCI approach is needed to keep students from discontinuing their studies and to help them advance their careers. This paper proposes a multimodal deep learning multi-layer Convolutional Neural Network (CNN) to improve the educational experience and enable educators to provide high-quality educational services to students. Our implementation results show promising real-time performance, including a high success rate in a constructive learning concept, a quality interaction experience, and enhanced educational services. We evaluated the accuracy of five multimodal inputs: Finger Touch (FT), Hands Up (HU), Hands Down (HD), Voice Command (VC), and Click/Typing (CT). The results show average accuracies of 90.8%, 87%, 88.6%, 91.8%, and 87%, respectively, demonstrating the effectiveness of the proposed approach.
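The record contains no implementation details beyond what the abstract states. As a rough illustration of the kind of multi-layer CNN classifier the abstract describes, the sketch below maps a batch of input frames to one of the five interaction classes (FT, HU, HD, VC, CT). The class name MultiLayerCNN, the three-block architecture, the layer widths, and the 64x64 input size are illustrative assumptions, not the authors' published design.

```python
# Hypothetical sketch of a multi-layer CNN classifier over the five interaction
# modalities named in the abstract (FT, HU, HD, VC, CT). All layer sizes, names,
# and the input encoding below are illustrative assumptions; the paper's actual
# architecture and hyperparameters are not given in this record.
import torch
import torch.nn as nn

NUM_CLASSES = 5  # Finger Touch, Hands Up, Hands Down, Voice Command, Click/Typing

class MultiLayerCNN(nn.Module):
    def __init__(self, in_channels: int = 3, num_classes: int = NUM_CLASSES):
        super().__init__()
        # Three convolutional blocks ("multi-layer"), each halving spatial size.
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> fixed-size feature vector
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)
        return self.classifier(feats)  # raw logits; apply softmax for probabilities

if __name__ == "__main__":
    model = MultiLayerCNN()
    # Dummy batch of 64x64 frames; for Voice Command, a spectrogram rendered as a
    # 2-D image could feed the same network, though a real multimodal system would
    # more likely fuse a separate audio branch with the vision branch.
    frames = torch.randn(8, 3, 64, 64)
    logits = model(frames)
    print(logits.shape)  # torch.Size([8, 5]) -> one score per interaction class
```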
Pages: 1718-1728
Page count: 11