U-Net-Embedded Gabor Kernel and Coaxial Correction Methods to Dorsal Hand Vein Image Projection System

Cited by: 1
Authors
Chen, Liukui [1 ]
Lv, Monan [1 ]
Cai, Junfeng [1 ]
Guo, Zhongyuan [2 ]
Li, Zuojin [1 ]
Affiliations
[1] Chongqing Univ Sci & Technol, Coll Intelligent Technol & Engn, Chongqing 401331, Peoples R China
[2] Southwest Univ, Coll Elect & Informat Engn, Chongqing 400715, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 20
Keywords
auxiliary venipuncture; vein segmentation; improved U-Net; coaxial correction; vein projection system; INTRAVENOUS CATHETER INSERTION; SEGMENTATION;
DOI
10.3390/app132011222
CLC Classification
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Vein segmentation and projection correction constitute the core algorithms of an auxiliary venipuncture device, providing accurate vein positioning to assist puncture and to reduce both the number of punctures and the pain of patients. This paper proposes an improved U-Net for segmenting veins and a coaxial correction method for image alignment in a self-built vein projection system. The proposed U-Net embeds Gabor convolution kernels in its shallow layers to enhance segmentation accuracy. Additionally, to mitigate the semantic information loss caused by channel reduction, the network is made lightweight by replacing conventional convolutions with inverted residual blocks. For visualization, a method combining coaxial correction with a homography matrix is proposed to address the non-planarity of the dorsal hand. First, a hot mirror is used to make the optical paths of the projector and the camera coaxial; the projected image is then aligned with the dorsal hand using a homography matrix. With this approach, the device requires only a single calibration before use. Using the improved segmentation method, an accuracy of 95.12% is achieved on the dataset, and the intersection-over-union between the segmented and original images reaches 90.07%. The entire segmentation process is completed in 0.09 s, and the largest distance error of vein projection onto the dorsal hand is 0.53 mm. The experiments show that the device reaches practical accuracy and has research and application value.
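The abstract outlines two technical components: a U-Net whose shallow layers use Gabor convolution kernels, with inverted residual blocks replacing conventional convolutions, and a coaxial-correction step that aligns the projected image with the dorsal hand via a homography matrix. The sketches below illustrate one plausible form of each idea; they are not the authors' implementation, and all module names, hyperparameters (kernel size, orientation count, expansion ratio), and calibration variables are illustrative assumptions.

```python
# Minimal PyTorch sketch (not the paper's code) of a Gabor-initialised shallow
# convolution and an inverted residual block, the two building blocks the
# abstract attributes to the improved U-Net.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def gabor_bank(ksize=7, sigma=2.0, lambd=4.0, gamma=0.5, n_orient=8):
    """Return n_orient real Gabor kernels with shape (n_orient, 1, ksize, ksize)."""
    half = ksize // 2
    coords = torch.arange(-half, half + 1, dtype=torch.float32)
    ys = coords.view(-1, 1).expand(ksize, ksize)   # row (y) coordinates
    xs = coords.view(1, -1).expand(ksize, ksize)   # column (x) coordinates
    kernels = []
    for i in range(n_orient):
        theta = math.pi * i / n_orient             # kernel orientation
        x_t = xs * math.cos(theta) + ys * math.sin(theta)
        y_t = -xs * math.sin(theta) + ys * math.cos(theta)
        g = torch.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2)) \
            * torch.cos(2 * math.pi * x_t / lambd)
        kernels.append(g)
    return torch.stack(kernels).unsqueeze(1)

class GaborConv(nn.Module):
    """Shallow encoder layer: grayscale input filtered by a learnable Gabor bank."""
    def __init__(self, ksize=7, n_orient=8):
        super().__init__()
        self.weight = nn.Parameter(gabor_bank(ksize=ksize, n_orient=n_orient))
    def forward(self, x):                           # x: (B, 1, H, W)
        pad = self.weight.shape[-1] // 2
        return F.relu(F.conv2d(x, self.weight, padding=pad))

class InvertedResidual(nn.Module):
    """Inverted residual block (expand -> depthwise -> project) with a skip path,
    used here as a lightweight stand-in for a conventional double convolution."""
    def __init__(self, ch, expand=4):
        super().__init__()
        hidden = ch * expand
        self.block = nn.Sequential(
            nn.Conv2d(ch, hidden, 1, bias=False), nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, ch, 1, bias=False), nn.BatchNorm2d(ch),
        )
    def forward(self, x):
        return x + self.block(x)                    # channels preserved, so skip is valid
```

For the projection side, the abstract states that once the hot mirror makes the projector and camera optical paths coaxial, a single homography calibration suffices to map the segmented veins onto the dorsal hand. A hedged OpenCV sketch of that mapping, assuming at least four corresponding calibration points have already been collected, is shown below.

```python
# Hedged OpenCV sketch of the single-calibration alignment step; the point pairs
# (camera_pts, projector_pts) and the projector_size tuple are assumed inputs.
import cv2
import numpy as np

def calibrate_homography(camera_pts, projector_pts):
    """Estimate the 3x3 homography mapping camera pixels to projector pixels."""
    H, _ = cv2.findHomography(np.asarray(camera_pts, dtype=np.float32),
                              np.asarray(projector_pts, dtype=np.float32))
    return H

def project_vein_mask(vein_mask, H, projector_size):
    """Warp the segmented vein mask into projector coordinates for display."""
    width, height = projector_size
    return cv2.warpPerspective(vein_mask, H, (width, height))
```

Under this reading, the coaxial optical path is what lets one stored homography remain usable across sessions, which is consistent with the abstract's claim that the device needs only a single calibration before use.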
Pages: 16