Human Action Recognition Using Deep Neural Networks

Cited by: 0
Authors
Koli, Rashmi R. [1]
Bagban, Tanveer I. [2]
Affiliations
[1] DKTE Society's Textile & Engineering Institute, Department of Computer Science & Engineering, Ichalkaranji, Maharashtra, India
[2] DKTE Society's Textile & Engineering Institute, Department of Information Technology, Ichalkaranji, Maharashtra, India
Source
Proceedings of the 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4 2020) | 2020
Keywords
Human Action Recognition; Deaf and Dumb; CNN; Hand Gesture
DOI
10.1109/worlds450073.2020.9210345
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Human activities such as body gestures are among the most difficult and challenging subjects for deep neural networks. Human action recognition is essentially human gesture recognition: a gesture is a movement of body parts that conveys a meaningful message. Gestures are among the most suitable and natural ways for humans to interact with computers, and they thus form a bridge between humans and machines. Human activity recognition also provides a platform for interacting with deaf and dumb people. In this research work, we develop a platform for hand-movement (gesture) recognition that uses a CNN to identify human gestures in images. The number of deaf and dumb people has increased rapidly owing to several conditions. Because deaf and dumb people cannot communicate verbally with a typical person, they must rely on a form of visual communication: sign language. Sign language provides an effective communication platform for hearing-impaired people to convey their thoughts and interact with others. The aim of this research is a gesture recognition system that recognizes gestures and converts gesture images into the corresponding text. The system pays special attention to the CNN training component. The concept is to design a system that applies deep learning principles to accept a gesture as input and produce the recognized output as text.
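As a rough illustration of the pipeline the abstract describes, the sketch below builds a small CNN image classifier that maps a hand-gesture image to a text label. The paper does not specify its architecture, so the input size, layer widths, class count, and label set here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a CNN gesture-to-text classifier of the kind the
# abstract describes. Input shape, layer widths, and the label set are
# assumptions; the paper does not give these details.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26            # assumption: one class per sign-language letter
INPUT_SHAPE = (64, 64, 1)   # assumption: 64x64 grayscale hand images

def build_model():
    """Small convolutional network mapping a hand image to a gesture class."""
    return models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Gesture-to-text step: look up the predicted class index in a text table.
LABELS = [chr(ord("A") + i) for i in range(NUM_CLASSES)]  # assumed label set
dummy_image = np.random.rand(1, *INPUT_SHAPE).astype("float32")
pred = model.predict(dummy_image, verbose=0)
print("recognized gesture:", LABELS[int(pred.argmax())])
```

The final lines show the gesture-to-text conversion the abstract mentions: after training on labeled gesture images, the network's predicted class index is translated into a text label.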
Pages: 376-380
Number of pages: 5
References
8 in total
[1] Anonymous, 1997, Proceedings of the National Conference on Artificial Intelligence, 9th Conference.
[2] Dong C., 2015, IEEE Computer Society Conference.
[3] Hiremath Rashmi B., 2016, International Journal of Innovative Research, Vol. 4.
[4] Kojima A., Tamura T., Fukunaga K. Natural language description of human activities from video images based on concept hierarchy of actions. International Journal of Computer Vision, 2002, 50(2): 171-184.
[5] Koller D., 1991, Proceedings of the 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, p. 90, DOI 10.1109/CVPR.1991.139667.
[6] Liu An-An, Su Yu-Ting, Jia Ping-Ping, Gao Zan, Hao Tong, Yang Zhao-Xuan. Multiple/Single-View Human Action Recognition via Part-Induced Multitask Structural Learning. IEEE Transactions on Cybernetics, 2015, 45(6): 1194-1208.
[7] Shinde Shweta S., 2016, IEEE Conference.
[8] Venugopalan S., 2014, Translating videos to natural language using deep recurrent neural networks.