A Study on Target Object Recognition of a Similar Shape using a Small Dataset based on the Siamese Network

Cited by: 0
Authors
Kwak J. [1 ]
Yang K.-M. [1 ]
Lee J.-I. [1 ]
Kim M.-G. [1 ]
Seo K.-H. [1 ,2 ]
Affiliations
[1] Human-Robot Interaction Research Center, Korea Institute of Robotics and Technology Convergence (KIRO)
[2] Department of Robot and Smart System Engineering, Kyungpook National University (KNU)
Keywords
Convolutional Neural Network; Deep Learning; Few-shot Learning; Object Recognition; Siamese Network;
DOI
10.5302/J.ICROS.2022.22.0102
Abstract
Target object recognition by a mobile robot typically relies on deep learning. Training a deep learning model to differentiate between target and non-target objects requires a dataset large enough to extract the features needed for classification. In a typical working environment, securing such a dataset is difficult because the set of people designated as workers changes constantly, and classifying objects with similar shapes requires a larger dataset than classifying objects with dissimilar shapes. A method for recognizing target objects with similar shapes from a small dataset is therefore needed. This paper proposes a Siamese-network-based target object recognition method that recognizes objects with similar shapes using a small dataset. A trained Siamese network determines whether an input object is the target object based on its similarity to the target. The recognition results of the proposed method were analyzed experimentally, with ResNet-50 used as the classification-based baseline for performance comparison. Our findings show that, using a small dataset, the proposed method recognized objects with a performance difference of only approximately 6% from the classification-based object recognition method, indicating its higher efficiency. © ICROS 2022.
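For intuition, the following is a minimal sketch (in PyTorch) of the kind of Siamese-network similarity check described in the abstract: a shared CNN encoder embeds a reference image of the target object and a query image, and a score computed from the embedding difference decides whether the query is the target. The backbone, embedding size, and decision threshold are illustrative assumptions, not details taken from the paper.

# Minimal Siamese-network sketch for target/non-target decision.
# Architecture and threshold are illustrative assumptions only.
import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        # Small CNN encoder shared by both branches (weights are tied).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Maps the element-wise |difference| of the two embeddings to a similarity score.
        self.head = nn.Sequential(nn.Linear(embed_dim, 1), nn.Sigmoid())

    def forward(self, reference, query):
        ref_emb = self.encoder(reference)
        qry_emb = self.encoder(query)
        return self.head(torch.abs(ref_emb - qry_emb))  # score in (0, 1)

def is_target(model, reference, query, threshold=0.5):
    """Decide target / non-target from the similarity score (assumed threshold)."""
    with torch.no_grad():
        return model(reference, query).item() >= threshold

# Usage with dummy 3x224x224 images
model = SiameseNet().eval()
ref = torch.randn(1, 3, 224, 224)
qry = torch.randn(1, 3, 224, 224)
print(is_target(model, ref, qry))

Because the network learns only similarity to a reference rather than a fixed set of classes, a new target can be registered with a handful of reference images instead of retraining a classifier, which is what makes this kind of approach suitable for small datasets.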
Pages: 811-816
Page count: 5
References (16 in total)
[1]  
Son H., Kim D., Yang S., Development of following algorithm for human-following robots using deep-learning technology and depth camera, Journal of Institute of Control, Robotics and Systems, 28, 2, pp. 95-101, (2022)
[2]  
Noh J., Yang K., Park M., Lee J., Kim M., Seo K., LiDAR point cloud augmentation for mobile robot safe navigation in indoor environment, Journal of Institute of Control, Robotics and Systems, 28, 1, pp. 52-58, (2022)
[3]  
Kim J., Kim D., Koo K., Position recognition and driving control for an autonomous mobile robot that tracks tile grid pattern, The Transactions of the Korean Institute of Electrical Engineers, 70, 6, pp. 945-952, (2021)
[4]  
Hsieh J., Chiang M., Fang C., Chen S., Online human action recognition using deep learning for indoor smart mobile robots, Proc. of the 2021 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 425-433, (2021)
[5]  
Secil S., Ozkan M., Minimum distance calculation using skeletal tracking for safe human-robot interaction, Robotics and Computer-Integrated Manufacturing, 73, pp. 1-15, (2021)
[6]  
Algabri R., Choi M., Target recovery for robust deep learning-based person following in mobile robots: Online trajectory prediction, Applied Sciences, 11, 9, pp. 4165-4184, (2021)
[7]  
Fujiyoshi H., Hirakawa T., Yamashita T., Deep learning-based image recognition for autonomous driving, IATSS Research, 43, 4, pp. 244-252, (2019)
[8]  
Mandal M., Kumar L.K., Saran M.S., Vipparthi S.K., MotionRec: A unified deep framework for moving object recognition, Proc. of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 2734-2743, (2020)
[9]  
He K., Zhang X., Ren S., Sun J., Deep residual learning for image recognition, Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, (2016)
[10]  
Tan M., Le Q., EfficientNet: Rethinking model scaling for convolutional neural networks, Proc. of the 36th International Conference on Machine Learning, pp. 6105-6114, (2019)