Toward Detection and Localization of Instruments in Minimally Invasive Surgery

Cited by: 103
Authors
Allan, Max [1 ,2 ]
Ourselin, Sebastien [1 ,3 ]
Thompson, Steve [1 ,3 ]
Hawkes, David J. [1 ,3 ]
Kelly, John [4 ]
Stoyanov, Danail [1 ,2 ]
Affiliations
[1] UCL, Ctr Med Image Comp, London WC1E 6BT, England
[2] UCL, Dept Comp Sci, London WC1E 6BT, England
[3] UCL, Dept Med Phys & Bioengn, London WC1E 6BT, England
[4] UCL, Div Surg & Intervent Sci, Sch Med, London WC1E 6BT, England
Funding
Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Instrument detection and localization; robotic assisted surgery; surgical vision; random forests; tracking
DOI
10.1109/TBME.2012.2229278
Chinese Library Classification
R318 [Biomedical Engineering];
Discipline Code
0831;
Abstract
Methods for detecting and localizing surgical instruments in laparoscopic images are an important element of advanced robotic and computer-assisted interventions. Robotic joint encoders and sensors integrated into or mounted on the instrument can provide information about the tool's position, but these measurements are often inaccurate when transferred to the surgeon's point of view. Vision sensors are currently a promising approach for determining the position of instruments in the coordinate frame of the surgical camera. In this study, we propose a vision algorithm for localizing the instrument's pose in 3-D, leaving only rotation about the axis of the tool's shaft as an ambiguity. We propose a probabilistic supervised classification method to detect pixels in laparoscopic images that belong to surgical tools. We then use the classifier output to initialize an energy minimization algorithm that estimates the pose of a prior 3-D model of the instrument within a level set framework. We show that the proposed method is robust against noise using simulated data, and we perform quantitative validation of the algorithm against ground truth obtained with an optical tracker. Finally, we demonstrate the practical application of the technique on in vivo data from minimally invasive surgery with traditional laparoscopic and robotic instruments.
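The abstract describes a two-stage pipeline: per-pixel probabilistic classification of instrument pixels (random forests, per the keywords), whose output then initializes a level-set energy minimization that fits a prior 3-D model of the instrument. As a rough illustration of the first stage only, the sketch below trains a random-forest pixel classifier and produces a tool-probability map; the feature set (HSV colour values), the parameters, and all helper names are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch, not the paper's method: per-pixel tool/background
# classification with a random forest. The probability map it produces could
# serve as the kind of classifier output used to seed a level-set pose
# estimation stage. Feature choice and parameters are assumed.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pixel_features(bgr_image):
    """Per-pixel colour features (assumed here: the three HSV channels)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    h, w = bgr_image.shape[:2]
    # Each row is one pixel's feature vector.
    return hsv.reshape(h * w, 3).astype(np.float32)

def train_pixel_classifier(images, masks, n_trees=50):
    """Fit a random forest on labelled frames (mask: 1 = instrument, 0 = tissue)."""
    X = np.vstack([pixel_features(im) for im in images])
    y = np.concatenate([m.reshape(-1) for m in masks])
    clf = RandomForestClassifier(n_estimators=n_trees, max_depth=10, n_jobs=-1)
    clf.fit(X, y)
    return clf

def tool_probability_map(clf, bgr_image):
    """Return a per-pixel probability of belonging to the instrument class."""
    h, w = bgr_image.shape[:2]
    proba = clf.predict_proba(pixel_features(bgr_image))[:, 1]
    return proba.reshape(h, w)
```

The resulting probability map would then drive the second stage, in which a region-based level-set energy is minimized over the pose parameters of the 3-D instrument model; that stage is not sketched here.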
Pages: 1050-1058
Number of pages: 9