Thigh Motion-Based Gait Analysis for Human Identification using Inertial Measurement Units (IMUs)

Cited by: 0
Authors
Asuncion, Lloyd Vincent R. [1 ]
De Mesa, Joan Xyrel P. [1 ]
Juan, Patrick Kyle H. [1 ]
Sayson, Nathaniel T. [1 ]
Dela Cruz, Angelo R. [1 ]
Affiliations
[1] Univ Santo Tomas, Fac Engn, Dept Elect Engn, Manila, Philippines
Source
2018 IEEE 10TH INTERNATIONAL CONFERENCE ON HUMANOID, NANOTECHNOLOGY, INFORMATION TECHNOLOGY, COMMUNICATION AND CONTROL, ENVIRONMENT AND MANAGEMENT (HNICEM) | 2018
Keywords
gait analysis; Euler angle; Inertial Measurement Unit; machine learning;
DOI
none
CLC classification
T [Industrial Technology];
Subject classification
08 ;
Abstract
Data security is an increasing concern due to the rapid pace of technological development and Internet of Things (IoT) implementation today. Smartphones in particular are becoming commonplace in the handling of sensitive information, leaving these devices vulnerable to data breaches. Biometric authentication is a viable alternative to current mobile phone security methods because biometric traits are inherent to an individual. One biometric authentication parameter of active interest is the human gait. Sensor-based gait identification in particular is widely researched because motion sensors are portable, wearable, and able to capture 3D motion. In this study, the researchers emulate a smartphone's IMU using two sensors placed simultaneously on the right and left thighs of 10 volunteers aged 20-26, emulating the two most common placements of a smartphone. The gait data acquired from the IMUs, namely the pitch, roll, and yaw angles of the volunteers, are the variables of this study. This study demonstrates the potential of human gait for biometric authentication with a Convolutional Neural Network (CNN) gait identification algorithm. The algorithm is applied to 4 datasets: 3 single-parameter datasets and 1 comprising all three parameters (roll, pitch, and yaw). For both left and right thigh data, the highest classification accuracy (98.34%) and precision (98.42%) were yielded by the three-parameter dataset, followed by the yaw-only dataset with a highest average accuracy of 93.02% and an average precision of 93.82%. The elapsed time during the training of each dataset is also recorded: CNN training on the three-parameter dataset took almost 3.6 times longer than on a single-parameter dataset.
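The pipeline described in the abstract — windows of three-channel Euler-angle data (roll, pitch, yaw) fed to a CNN that classifies which of the 10 subjects is walking — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the window length, kernel size, filter count, and untrained random weights are all assumptions chosen only to show the data flow from a (time x channel) gait window to per-subject probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SUBJECTS = 10   # volunteers to identify (from the paper)
WINDOW = 128      # samples per gait window (assumed)
CHANNELS = 3      # roll, pitch, yaw

def conv1d_relu(x, w, b):
    """Valid-mode 1-D convolution plus ReLU.

    x: (T, C_in) gait window; w: (K, C_in, C_out) kernels; b: (C_out,).
    """
    K, _, c_out = w.shape
    T = x.shape[0] - K + 1
    out = np.zeros((T, c_out))
    for t in range(T):
        # Correlate the length-K slice against every filter at once.
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def softmax(z):
    z = z - z.max()               # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Random (untrained) weights -- stand-ins for the trained CNN parameters.
w1 = rng.normal(scale=0.1, size=(5, CHANNELS, 16))  # kernel 5, 16 filters
b1 = np.zeros(16)
w2 = rng.normal(scale=0.1, size=(16, N_SUBJECTS))   # dense classifier head
b2 = np.zeros(N_SUBJECTS)

window = rng.normal(size=(WINDOW, CHANNELS))   # one synthetic gait window
feat = conv1d_relu(window, w1, b1).mean(axis=0)  # global average pooling
probs = softmax(feat @ w2 + b2)                  # per-subject probabilities

print(probs.shape, float(probs.sum()))
```

A single-parameter variant of the experiment corresponds to `CHANNELS = 1` (e.g. yaw only); the three-parameter dataset stacks all three angles as input channels, which is what the abstract reports as yielding the highest accuracy.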
Pages: 6