SINet: A hybrid deep CNN model for real-time detection and segmentation of surgical instruments

Cited by: 2
Authors
Liu, Zhenzhong
Zhou, Yifan
Zheng, Laiwang
Zhang, Guobin
Affiliations
[1] Tianjin Univ Technol, Sch Mech Engn, Tianjin Key Lab Adv Mechatron Syst Design & Intel, Tianjin 300384, Peoples R China
[2] Tianjin Univ Technol, Natl Demonstrat Ctr Expt Mech & Elect Engn Educ, Tianjin, Peoples R China
Keywords
Deep learning; Object detection; Surgical instruments; Semantic segmentation; Minimally invasive surgery; Neural networks; Localization; System; Tools
DOI
10.1016/j.bspc.2023.105670
Chinese Library Classification (CLC)
R318 [Biomedical Engineering];
Discipline Code
0831;
Abstract
Objective: Detection and segmentation of surgical instruments are indispensable technologies in robot-assisted surgery, enabling doctors to obtain more comprehensive visual information and further improving surgical safety. However, detection results are easily disturbed by environmental factors such as instrument shaking, incomplete display, and insufficient lighting. To overcome these issues, we designed a hybrid deep-CNN model (SINet) for real-time surgical instrument detection and segmentation. Methods: The framework employs YOLOv5 as the object detection model and introduces the GAM attention mechanism to improve its feature-extraction ability. During training, the SiLU activation function is adopted to avoid gradient explosions and unstable training. Specifically, the vector angle between the ground-truth and predicted boxes is exploited in the SIoU loss function to reduce the degrees of freedom of the regression and accelerate network convergence. Finally, a semantic segmentation head runs in parallel with the detection branch to segment the instruments. Results: The proposed method was evaluated on the public m2cai16-tool-locations dataset and achieved 97.9% mean average precision (mAP), 133 frames per second (FPS), 85.7% mean intersection over union (MIoU), and an 86.6% Dice score. Experiments on a simulated surgery platform also show satisfactory detection performance. Conclusion: Experimental results demonstrate that SINet can effectively detect the pose of surgical instruments and achieves better performance than most current algorithms. The method has the potential to help perform a series of surgical operations efficiently and safely.
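The SiLU activation adopted in the training described above has a simple closed form, x · sigmoid(x). The sketch below is only an illustration of that function, not the authors' implementation; the helper name and the sample inputs are assumptions.

```python
import math

def silu(x: float) -> float:
    """SiLU (sigmoid-weighted linear unit): x * sigmoid(x).

    Smooth and non-monotonic; unlike ReLU, its negative branch is
    bounded and differentiable everywhere, which helps keep training
    gradients stable.
    """
    return x / (1.0 + math.exp(-x))

# Large positive inputs pass through almost unchanged,
# while negative inputs are softly gated toward zero.
print(round(silu(0.0), 4))   # 0.0
print(round(silu(2.0), 4))   # 1.7616
print(round(silu(-2.0), 4))  # -0.2384
```

In frameworks such as PyTorch this is available directly as `torch.nn.SiLU`, so a YOLOv5-style backbone can use it as a drop-in replacement for other activations.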
Pages: 11