StairNet: visual recognition of stairs for human-robot locomotion

Cited by: 3
Authors
Kurbis, Andrew Garrett [1,3]
Kuzmenko, Dmytro [4]
Ivanyuk-Skulskiy, Bogdan [4]
Mihailidis, Alex [1,3]
Laschowski, Brokoslaw [2,3,5]
Affiliations
[1] Univ Toronto, Inst Biomed Engn, Toronto, ON, Canada
[2] Univ Toronto, Robot Inst, Toronto, ON, Canada
[3] Toronto Rehabil Inst, KITE Res Inst, Toronto, ON, Canada
[4] Natl Univ Kyiv Mohyla Acad, Dept Math, Kyiv, Ukraine
[5] Univ Toronto, Dept Mech & Ind Engn, Toronto, ON, Canada
Keywords
Computer vision; Deep learning; Wearable robotics; Prosthetics; Exoskeletons
DOI
10.1186/s12938-024-01216-0
Chinese Library Classification (CLC)
R318 [Biomedical Engineering]
Discipline Code
0831
Abstract
Human-robot walking with prosthetic legs and exoskeletons, especially over complex terrains such as stairs, remains a significant challenge. Egocentric vision has the unique potential to detect the walking environment prior to physical interactions, which can improve transitions to and from stairs. This motivated us to develop the StairNet initiative to support the development of new deep learning models for visual perception of real-world stair environments. In this study, we present a comprehensive overview of the StairNet initiative and key research to date. First, we summarize the development of our large-scale data set with over 515,000 manually labeled images. We then provide a summary and detailed comparison of the performances achieved with different algorithms (i.e., 2D and 3D CNN, hybrid CNN and LSTM, and ViT networks), training methods (i.e., supervised learning with and without temporal data, and semi-supervised learning with unlabeled images), and deployment methods (i.e., mobile and embedded computing), using the StairNet data set. Finally, we discuss the challenges and future directions. To date, our StairNet models have consistently achieved high classification accuracy (i.e., up to 98.8%) with different designs, offering trade-offs between model accuracy and size. When deployed on mobile devices with GPU and NPU accelerators, our deep learning models achieved inference times as fast as 2.8 ms. In comparison, when deployed on our custom-designed CPU-powered smart glasses, our models yielded slower inference times of 1.5 s, presenting a trade-off between human-centered design and performance. Overall, the results of numerous experiments presented herein provide consistent evidence that StairNet can be an effective platform to develop and study new deep learning models for visual perception of human-robot walking environments, with an emphasis on stair recognition. This research aims to support the development of next-generation vision-based control systems for robotic prosthetic legs, exoskeletons, and other mobility assistive technologies.
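To make the pipeline described in the abstract concrete (an image classifier trained on egocentric frames, then exported for on-device inference), the sketch below fine-tunes a lightweight ImageNet-pretrained backbone and converts it to TensorFlow Lite. This is a minimal illustration, not the published StairNet architecture: the MobileNetV2 backbone, the four-class label set, and the function names are assumptions for this example.

```python
# Minimal sketch of an egocentric stair classifier with on-device export.
# Assumptions (not from the paper): MobileNetV2 backbone, 4 terrain classes,
# TensorFlow Lite as the mobile deployment format.
import tensorflow as tf

NUM_CLASSES = 4  # hypothetical label set, e.g., level ground vs. stair transitions


def build_classifier(input_shape=(224, 224, 3), num_classes=NUM_CLASSES):
    """Image classifier: frozen ImageNet backbone plus a small trainable head."""
    backbone = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights="imagenet")
    backbone.trainable = False  # fine-tune only the classification head
    model = tf.keras.Sequential([
        backbone,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


def export_tflite(model, path="stairnet_sketch.tflite"):
    """Convert the trained model to TensorFlow Lite for mobile/embedded inference."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
    with open(path, "wb") as f:
        f.write(converter.convert())


if __name__ == "__main__":
    model = build_classifier()
    model.summary()
    export_tflite(model)
```

Post-training quantization of a small backbone is one common route to the millisecond-scale latencies on mobile GPU/NPU accelerators that the abstract reports, and it reflects the same accuracy-versus-size trade-off the authors highlight.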
Pages: 19
Related Papers
Showing 10 of 50 records
  • [1] Kurbis A.G., Kuzmenko D., Ivanyuk-Skulskiy B., Mihailidis A., Laschowski B. StairNet: visual recognition of stairs for human-robot locomotion. BioMedical Engineering OnLine, 2024, 23.
  • [2] Gao X., Zheng M., Meng M.Q.-H. Humanoid Robot Locomotion Control by Posture Recognition for Human-Robot Interaction. 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2015: 1572-1577.
  • [3] Nickel K., Stiefelhagen R. Visual recognition of pointing gestures for human-robot interaction. Image and Vision Computing, 2007, 25(12): 1875-1884.
  • [4] Xia Y., Sattar J. Visual Diver Recognition for Underwater Human-Robot Collaboration. 2019 International Conference on Robotics and Automation (ICRA), 2019: 6839-6845.
  • [5] Kurbis A.G., Mihailidis A., Laschowski B. Development and Mobile Deployment of a Stair Recognition System for Human-Robot Locomotion. IEEE Transactions on Medical Robotics and Bionics, 2024, 6(1): 271-280.
  • [6] Heinzmann J., Zelinsky A. Visual human-robot interaction. 2001 International Workshop on Bio-Robotics and Teleoperation, Proceedings, 2001: 113-118.
  • [7] Jezernik S., Morari M. Controlling the human-robot interaction for robotic rehabilitation of locomotion. 7th International Workshop on Advanced Motion Control, Proceedings, 2002: 133-135.
  • [8] Wei S., Jiang W. Human Posture Recognition for Human-Robot Interaction. 2011 3rd World Congress in Applied Computing, Computer Science, and Computer Engineering (ACC 2011), 2011, Vol. 4: 305-310.
  • [9] Martinez-Martin E., del Pobil A.P. Visual Surveillance for Human-Robot Interaction. Proceedings of the 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2012: 3333-3338.
  • [10] Martínez A.M., Vitrià J. Clustering in image space for place recognition and visual annotations for human-robot interaction. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2001, 31(5): 669-682.