Visual Navigation in Unstructured Environments and the Development of a Multi-purpose Mobile Robot Platform

Cited: 0
Authors
Sim, Hyun-Jae [1 ]
Kim, Joshua [2 ]
Hong, Jungee [1 ]
Nam, Seung-Won [1 ]
Kim, Yong-Hee [1 ]
Heo, Jae [1 ]
Hwang, Hwan-Cheol [1 ,2 ]
Kim, Kwang-Ki [1 ]
Affiliations
[1] Department of Electrical and Computer Engineering, Inha University
[2] ICT Convergence Research Institute, SJ Tech
Keywords
autonomous mobile robot; delivery robot; navigation; semantic segmentation; vision control
DOI
10.5302/J.ICROS.2024.24.0101
Abstract
In this paper, we present the development of a visual navigation system and mobile robot platform designed for autonomous driving in outdoor unstructured environments. To address the challenges posed by outdoor environments, where inter-object features are ambiguous and environmental conditions are highly irregular, we apply semantic segmentation to the mobile robot's local navigation system, enabling it to stay within the navigable region. A simplified segmentation architecture is adopted to process large volumes of visual data and is integrated with vision-based control strategies for effective navigation planning. Additionally, we propose a supplementary road-segmentation method that uses depth information to ensure stable and robust driving. Our work also includes the design of a wheeled mobile robot capable of operating in various environments, highlighting its practical applicability across diverse fields. The platform's potential is validated through empirical evaluations with self-developed robots in various driving scenarios, demonstrating over 80% accuracy in navigable-region classification and over 90% accuracy in road segmentation under dynamic conditions. © ICROS 2024.
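The abstract outlines a pipeline in which a per-pixel navigable-region mask from a semantic segmentation network is converted into a local steering command. The paper's actual network architecture and control law are not reproduced here; the following is a minimal sketch of that idea, assuming an (H, W) class-ID map from an arbitrary segmentation model, with hypothetical class IDs and a simple proportional centroid controller standing in for the authors' vision-based control strategy.

```python
import numpy as np

def navigable_mask(class_map: np.ndarray, navigable_ids=(0,)) -> np.ndarray:
    """Binary mask of pixels predicted as navigable (e.g., road, sidewalk).

    `class_map` is an (H, W) array of per-pixel class IDs from any
    semantic segmentation network; the ID set here is a placeholder.
    """
    return np.isin(class_map, navigable_ids)

def steering_from_mask(mask: np.ndarray,
                       band_fraction: float = 0.25,
                       gain: float = 1.0) -> float:
    """Proportional steering command in [-1, 1] derived from the mask.

    Considers only the bottom `band_fraction` of the image (the ground
    nearest the robot), finds the horizontal centroid of navigable
    pixels, and steers toward it. Returns 0.0 when no navigable pixels
    are visible, in which case the caller should stop the robot.
    """
    h, w = mask.shape
    band = mask[int(h * (1.0 - band_fraction)):, :]
    cols = np.nonzero(band)[1]  # column indices of navigable pixels
    if cols.size == 0:
        return 0.0
    offset = (cols.mean() - (w - 1) / 2.0) / ((w - 1) / 2.0)  # in [-1, 1]
    return float(np.clip(gain * offset, -1.0, 1.0))

# Synthetic 4x8 class map where ID 0 marks the drivable road surface.
demo = np.array([[2, 2, 2, 2, 2, 2, 2, 2],
                 [2, 2, 0, 0, 0, 2, 2, 2],
                 [2, 0, 0, 0, 0, 0, 2, 2],
                 [0, 0, 0, 0, 0, 2, 2, 2]])
print(steering_from_mask(navigable_mask(demo), band_fraction=0.5))  # < 0: steer left
```

In a real deployment, the band fraction and gain would be tuned on the physical platform, and the depth-based road check the paper describes could veto segmentation pixels that do not lie on the ground plane.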
Pages: 913-923
Number of pages: 10