Application dependable interaction module for computer vision-based human-computer interactions

Cited by: 6
Authors
Al-Ma'aitah, Mohammed [1 ]
Alwadain, Ayed [1 ]
Saad, Aldosary [1 ]
Affiliations
[1] King Saud Univ, Community Coll, Comp Sci Dept, Riyadh, Saudi Arabia
Keywords
Computer vision; Deep belief networks; Human-computer interaction; Interaction behavior; Multimodal processing; HUMAN INTERACTION RECOGNITION; SYSTEM;
DOI
10.1016/j.compeleceng.2021.107553
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology];
Discipline code
0812 ;
Abstract
Human-computer interaction (HCI) is a prominent development that provides autonomous and pervasive service broadcasting for in-person communications. Computer vision techniques are integrated into HCI for human detection and object classification in autonomous communication environments. This article introduces an application dependable interaction module (ADIM) that optimally recognizes humans and other autonomous systems for communication. This recognition helps to precisely detect the application demands of the user/system for flawless service broadcasting. In this process, a deep belief network is used for persistent analysis of the behavior of the system/human at the interacting end. The behavior is characterized using touch, voice, and commands/requests to identify the interacting object at the initial stage. The metrics of sharing delay, response latency, interaction failures, recognition ratio, and error are employed to verify the proposed module's performance.
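The abstract describes a deep belief network (DBN) classifying interaction behavior from multimodal inputs (touch, voice, commands/requests). The record does not include the paper's implementation; the following is a minimal illustrative sketch only, approximating a DBN as stacked RBM feature extractors feeding a logistic-regression classifier, with entirely synthetic feature layout and labels (all dimensions and names are assumptions, not from the paper).

```python
# Hypothetical sketch of DBN-based interaction-behavior classification.
# Feature layout and data are synthetic; not the authors' implementation.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)

# Assumed multimodal feature vector per interaction, scaled to [0, 1]:
# 8 touch features + 8 voice features + 4 command/request features.
X = rng.random((200, 20))
# Synthetic label: 1 = human, 0 = autonomous system.
y = (X[:, :8].mean(axis=1) > 0.5).astype(int)

# Stacked RBMs approximate the DBN's layer-wise feature learning;
# a logistic-regression head performs the final recognition step.
dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=16, learning_rate=0.05,
                          n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=8, learning_rate=0.05,
                          n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn.fit(X, y)
print("training accuracy:", dbn.score(X, y))
```

In practice the recognition ratio and error metrics reported in the abstract would be evaluated on held-out interaction data rather than the training set used here.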
Pages: 16
Related papers
24 in total
[1]   Character Recognition in Air-Writing Based on Network of Radars for Human-Machine Interface [J].
Arsalan, Muhammad ;
Santra, Avik .
IEEE SENSORS JOURNAL, 2019, 19 (19) :8855-8864
[2]   A dynamic and interoperable communication framework for controlling the operations of wearable sensors in smart healthcare applications [J].
Baskar, S. ;
Shakeel, P. Mohamed ;
Kumar, R. ;
Burhanuddin, M. A. ;
Sampath, R. .
COMPUTER COMMUNICATIONS, 2020, 149 :17-26
[3]   Computer-vision based second-order (kinetic-color) data generation: arsenic quantitation in natural waters [J].
Belen, Federico ;
Danilo Vallese, Federico ;
Leist, Lisa G. T. ;
Ferrao, Marco Flores ;
Gomes, Adriano de Araujo ;
Fabian Pistonesi, Marcelo .
MICROCHEMICAL JOURNAL, 2020, 157
[4]   Automated multi-feature human interaction recognition in complex environment [J].
Bibi, Shafina ;
Anjum, Nadeem ;
Sher, Muhammad .
COMPUTERS IN INDUSTRY, 2018, 99 :282-293
[5]   A data-driven programming of the human-computer interactions for modeling a collaborative manufacturing system of hypoid gears by considering both geometric and physical performances [J].
Ding, Han ;
Wan, Zhigang ;
Zhou, Yuansheng ;
Tang, Jinyuan .
ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING, 2018, 51 :121-138
[6]   Interactive-Control-Model for Human-Computer Interactive System Based on Petri Nets [J].
Ding, Zhijun ;
Qiu, Haojie ;
Yang, Ru ;
Jiang, Changjun ;
Zhou, MengChu .
IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2019, 16 (04) :1800-1813
[7]   Vision-Based Online Adaptation of Motion Primitives to Dynamic Surfaces: Application to an Interactive Robotic Wiping Task [J].
Dometios, Athanasios C. ;
Zhou, You ;
Papageorgiou, Xanthi S. ;
Tzafestas, Costas S. ;
Asfour, Tamim .
IEEE ROBOTICS AND AUTOMATION LETTERS, 2018, 3 (03) :1410-1417
[8]   A Novel Natural Mobile Human-Machine Interaction Method With Augmented Reality [J].
Du, Guanglong ;
Zhang, Bo ;
Li, Chunquan ;
Yuan, Hua .
IEEE ACCESS, 2019, 7 :154317-154330
[9]   Gesture Recognition Based on CNN and DCGAN for Calculation and Text Output [J].
Fang, Wei ;
Ding, Yewen ;
Zhang, Feihong ;
Sheng, Jack .
IEEE ACCESS, 2019, 7 :28230-28237
[10]   Human interaction recognition using spatial-temporal salient feature [J].
Hu, Tao ;
Zhu, Xinyan ;
Wang, Shaohua ;
Duan, Lian .
MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 (20) :28715-28735