A Manual Assembly Virtual Training System With Automatically Generated Augmented Feedback: Using the Comparison of Digitized Operator's Skill

Cited by: 0
Authors
Singhaphandu, Raveekiat [1 ,2 ]
Pannakkong, Warut [2 ]
Huynh, Van-Nam [1 ]
Boonkwan, Prachya [3 ]
Affiliations
[1] Japan Adv Inst Sci & Technol, Sch Knowledge Sci, Ishikawa 9231211, Japan
[2] Thammasat Univ, Sirindhorn Int Inst Technol, Sch Mfg Syst & Mech Engn, Pathum Thani 12120, Thailand
[3] Natl Elect & Comp Technol Ctr, Pathum Thani 12120, Thailand
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Training; Assembly; Task analysis; Motors; Production; Color; Visualization; Industrial engineering; Pose estimation; Virtual environments; Fourth Industrial Revolution; Industrial training; deep learning; digital twins; pose estimation; computer vision; manual assembly; virtual training; augmented feedback; Industry 4.0; skill assessment; ACTION RECOGNITION; COMPOSITE LAYUP; REALITY; TIME; MAINTENANCE; NETWORKS; MODEL
DOI
10.1109/ACCESS.2024.3436910
Chinese Library Classification (CLC) code
TP [Automation technology, computer technology]
Discipline classification code
0812
Abstract
Industrial manual assembly (I-MA) training is typically delivered face-to-face by an experienced operator or expert. Expert trainers are few and their availability is limited, which restricts trainees' access to training. Industrial virtual training systems (I-VTS), both from research communities and commercial vendors, focus on delivering task demonstrations with rich, immersive multimedia content. Such systems can significantly lessen reliance on the expert, but they still require an expert to observe trainees and provide feedback. This study presents the "EXpert Independent Manual AsseMbly Virtual Trainer" (EXAMINER), which addresses these limitations by integrating vision-based digitization of I-MA skills, comparison of the digitized operators, and automatically generated augmented feedback that assesses training outcomes. This approach broadens trainee access to training without expert dependency. The study discusses the rationale behind and implementation of the EXAMINER framework, together with a simulated case study evaluating the framework's ability to digitize performance and deliver appropriate extrinsic augmented terminal feedback based on participant performance, marking a significant advance in industrial training technology.
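The abstract's core idea, comparing a trainee's digitized (pose-estimated) motion against an expert's to generate automatic feedback, can be illustrated with a small sketch. This is not the paper's actual pipeline; it assumes each frame is a flattened vector of 2D pose keypoints and uses dynamic time warping (DTW), a common choice for aligning motion sequences of different lengths:

```python
import numpy as np

def frame_distance(a, b):
    """Euclidean distance between two flattened pose-keypoint frames."""
    return float(np.linalg.norm(a - b))

def dtw_score(expert_seq, trainee_seq):
    """Align two pose sequences with dynamic time warping and return
    the accumulated alignment cost (lower = more similar motion)."""
    n, m = len(expert_seq), len(trainee_seq)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = frame_distance(expert_seq[i - 1], trainee_seq[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip an expert frame
                                 cost[i, j - 1],      # skip a trainee frame
                                 cost[i - 1, j - 1])  # match the two frames
    return cost[n, m]

# Toy "digitized operators": 17 keypoints x 2 coordinates per frame.
rng = np.random.default_rng(0)
expert = rng.normal(size=(50, 34))
trainee = expert + rng.normal(scale=0.1, size=(50, 34))  # similar motion
stranger = rng.normal(size=(60, 34))                      # unrelated motion

assert dtw_score(expert, trainee) < dtw_score(expert, stranger)
```

In a terminal-feedback setting like the one described, such a score could be thresholded after the session to classify the trainee's attempt; the keypoint layout (17 x 2) and DTW itself are illustrative assumptions, not details taken from the paper.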
Pages: 133356-133391
Page count: 36