Deep-Learning-Based Context-Aware Multi-Level Information Fusion Systems for Indoor Mobile Robots Safe Navigation

Cited by: 4
Authors
Jia, Yin [1 ]
Ramalingam, Balakrishnan [1 ]
Mohan, Rajesh Elara [1 ]
Yang, Zhenyuan [1 ]
Zeng, Zimou [1 ]
Veerajagadheswar, Prabakaran [1 ]
Affiliations
[1] Singapore Univ Technol & Design SUTD, Engn Prod Dev Pillar, Singapore 487372, Singapore
Keywords
autonomous mobile robot; environment recognition; DCNN; image classification; contextual features; supervised learning; hazardous object detection;
DOI
10.3390/s23042337
Chinese Library Classification (CLC) number
O65 [Analytical Chemistry];
Subject classification codes
070302; 081704;
Abstract
Hazardous object detection (escalators, stairs, glass doors, etc.) and avoidance are critical functional safety modules for autonomous mobile cleaning robots. Conventional object detectors are less accurate at detecting low-feature hazardous objects and suffer from missed detections and a high false-classification ratio when the object is occluded. A missed detection or false classification of a hazardous object poses an operational safety issue for mobile robots. This work presents a deep-learning-based context-aware multi-level information fusion framework that enables autonomous mobile cleaning robots to detect and avoid hazardous objects with a higher confidence level, even when the object is occluded. First, an image-level contextual-encoding module was proposed and incorporated into the Faster R-CNN ResNet-50 object detector to improve detection of low-feature and occluded hazardous objects in indoor environments. Further, a safe-distance-estimation function was proposed for hazard avoidance: it computes the distance of the hazardous object from the robot's position using the detection results and object depth data, and steers the robot into a safer zone. The proposed framework was trained on a custom image dataset using fine-tuning techniques and tested in real time on an in-house-developed mobile cleaning robot, BELUGA. The experimental results show that the proposed algorithm detected low-feature and occluded hazardous objects with a higher confidence level than a conventional object detector and scored an average detection accuracy of 88.71%.
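The safe-distance-estimation step summarized in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the 1 m safety threshold, and the choice of the median depth over the detected bounding box are all illustrative assumptions; the paper only states that the distance is computed from the detection results and object depth data (e.g., from an RGB-D sensor).

```python
import numpy as np

def estimate_safe_distance(bbox, depth_map, hazard_threshold_m=1.0):
    """Estimate the distance to a detected hazardous object (illustrative sketch).

    bbox:       (x1, y1, x2, y2) pixel coordinates from the object detector.
    depth_map:  H x W array of per-pixel depth in metres (e.g., an RGB-D frame).
    Returns (distance_m, is_safe); distance_m is None if no valid depth exists.
    """
    x1, y1, x2, y2 = bbox
    roi = depth_map[y1:y2, x1:x2]          # depth pixels inside the detection box
    valid = roi[roi > 0]                   # discard invalid (zero) depth readings
    if valid.size == 0:
        return None, False                 # no depth data: conservatively unsafe
    distance = float(np.median(valid))     # median is robust to depth noise
    return distance, distance > hazard_threshold_m

# Illustrative usage: a hazard region 0.8 m away in an otherwise 3 m deep scene.
depth = np.full((480, 640), 3.0)
depth[100:200, 100:200] = 0.8
dist, safe = estimate_safe_distance((100, 100, 200, 200), depth)
```

In a full pipeline, an unsafe result would trigger the navigation stack to steer the robot away from the detected hazard; that control step is outside the scope of this sketch.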
Pages: 13