A novel directional object detection method for piled objects using a hybrid region-based convolutional neural network

Times Cited: 23
Authors
Chiu, Ming-Chuan [1 ]
Tsai, Ho-Yen [1 ]
Chiu, Jing-Er [2 ]
Affiliations
[1] Natl Tsing Hua Univ, Dept Ind Engn & Engn Management, Hsinchu, Taiwan
[2] Natl Yunlin Univ Sci & Technol, Dept Ind Engn & Management, Touliu, Yunlin, Taiwan
Keywords
Piled object; Directional object detection; Deep learning; Smart manufacturing;
DOI
10.1016/j.aei.2021.101448
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Digital transformation is an information technology (IT) process that integrates digital information with operating processes. Its introduction to the workplace can promote the development of progressively more efficient manufacturing processes, accelerating competition in terms of speed and production capacity. Equipment combined with computer vision has begun to replace manpower in certain industries, including manufacturing. However, current object detection methods are unable to identify the actual rotation angle of a specific grasping target when objects are piled. Hence, this study proposes a deep-learning-based framework that integrates two object detection models. Faster R-CNN (region-based convolutional neural network) is utilized to locate the direction reference point of the target, and Mask R-CNN is adopted to obtain the segmentation, which not only forms the basis of an area filter but also generates a rotated bounding box via the minAreaRect function. After integrating the outputs of the two models, the location and actual rotation angle of the target can be obtained. The purpose of this research is to provide a robot arm with the position and angle of the topmost object for grasping. An empirical dataset of piled footwear insoles from an assembly process was employed to test the proposed method. Results show that detection accuracy reached 96.26%. Implementing the proposed method in the manufacturing process not only saves the manpower responsible for sorting products but also reduces process time, enlarging production capacity. The proposed method can serve as part of a smart manufacturing system to enhance an enterprise's competitiveness in the future.
Pages: 15