High accuracy keyway angle identification using VGG16-based learning method

Cited by: 7
Authors
Sarker, Soma [1 ]
Tushar, Sree Nirmillo Biswash [2 ]
Chen, Heping [1 ]
Affiliations
[1] Texas State Univ, Ingram Sch Engn, San Marcos, TX 78666 USA
[2] Univ Tennessee, Dept Elect Engn, Knoxville, TN 37996 USA
Keywords
Industrial robot; Manufacturing automation; CNN; Computer vision; Machine learning; Rule-based approach; Classification
DOI
10.1016/j.jmapro.2023.04.019
CLC classification
T [Industrial Technology];
Subject classification code
08;
Abstract
Aligning perforated pipes across different manufacturing workstations is critical for ensuring the quality of the final product in the oil & gas industry. Keyways inside the pipes are typically used for alignment. In order to automate the alignment process using an industrial robot, the keyway angle must be identified accurately. Because environmental conditions on the shop floor keep changing, current methods cannot satisfy the accurate alignment requirement. Recently, VGG16 has become one of the most effective architectures for vision-based problems under varying lighting conditions. Therefore, this paper proposes a method based on the VGG16 architecture to identify the keyway angle and satisfy the manufacturing requirement (angle accuracy ) under different lighting conditions. To demonstrate the effectiveness of the proposed method, two traditional methods, a commercial vision method and a geometrical rule-based method, are also investigated. Comparisons of the three methods show that the proposed method performs best, yielding an angle error of less than 1 degree in 98.38% of the testing images. The results indicate that the VGG16-based method has significant potential to improve manufacturing processes.
Pages: 223-233
Page count: 11