Two-Stage Object Detection for Autonomous Mobile Robot Using Faster R-CNN

Cited: 0
Authors
Abdul-Khalil, Syamimi [1 ]
Abdul-Rahman, Shuzlina [2 ]
Mutalib, Sofianita [2 ]
Affiliations
[1] Univ Teknol MARA, Coll Comp Informat & Media, Sch Comp Sci, Shah Alam 40450, Selangor, Malaysia
[2] Univ Teknol MARA, Coll Comp Informat & Media, Res Initiat Grp Intelligent Syst, Shah Alam 40450, Selangor, Malaysia
Source
INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 4, INTELLISYS 2023 | 2024 / Vol. 825
Keywords
Autonomous mobile robot; Deep learning; Object detection
DOI
10.1007/978-3-031-47718-8_9
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Autonomous Mobile Robots (AMRs) are increasingly being studied and applied across several industries. AMRs contribute to the development of Artificial Intelligence (AI), particularly to the growth of human-interaction systems. However, mobile robots operate in real time and in changing surroundings, which imposes limitations that may affect the efficiency of an application. Object detection architectures fall into two categories: single-stage detectors and two-stage detectors. This research presents experimental results for a two-stage detector, the Faster Region-based Convolutional Neural Network (Faster R-CNN). The experiments use the SODA10M dataset, which consists of 20,000 labelled images. Extensive experiments are performed by tuning the model's configuration, including the labelling, the iteration count, and the model baseline, to obtain optimal results. The detection model is evaluated with the standard Mean Average Precision (mAP) metric to assess detection accuracy. The overall findings achieve a highest mAP of 37.51%, which aligns with the original research by the dataset's developers. Nevertheless, the project identified experimental limitations that contributed to this accuracy value: imbalanced labelling, the training environment, and the dataset size.
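The mAP metric used to evaluate the detector can be sketched in a few lines. The following is a minimal illustration of per-class Average Precision with all-point interpolation (Pascal VOC 2010+ style), averaged over classes; the detection tuples and ground-truth counts are hypothetical examples, not values from the paper's SODA10M experiments, and the actual evaluator (e.g. the one bundled with a detection framework) may differ in IoU matching details.

```python
def average_precision(detections, num_gt):
    """AP for one class.

    detections: list of (confidence, is_true_positive) tuples, where the
        true/false-positive flag is assumed to come from prior IoU matching.
    num_gt: number of ground-truth boxes for this class.
    """
    # Rank detections by descending confidence.
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    precisions, recalls = [], []
    for _, is_tp in detections:
        if is_tp:
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / num_gt)
    # Make the precision envelope monotonically non-increasing.
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    # Area under the precision-recall curve.
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap


def mean_average_precision(per_class):
    """per_class: list of (detections, num_gt), one entry per object class."""
    return sum(average_precision(d, n) for d, n in per_class) / len(per_class)
```

For example, with two true positives out of three detections and two ground-truth boxes, `average_precision([(0.9, True), (0.8, False), (0.7, True)], 2)` yields 5/6 ≈ 0.833.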
Pages: 122-138 (17 pages)