Indoor Localization of Hand-Held OCT Probe Using Visual Odometry and Real-Time Segmentation Using Deep Learning

Cited by: 9
Authors
Qin, Xi [1 ]
Wang, Bohan [4 ]
Boegner, David [5 ]
Gaitan, Brandon [5 ]
Zheng, Yingning [2 ]
Du, Xian [3 ]
Chen, Yu [2 ]
Affiliations
[1] Ohio Univ, Sch Elect Engn & Comp Sci, Athens, OH 45701 USA
[2] Univ Massachusetts, Dept Biomed Engn, Amherst, MA 01003 USA
[3] Univ Massachusetts, Dept Mech & Ind Engn, Amherst, MA 01003 USA
[4] Steer Tech LLC, Inst Appl Life Sci, Annapolis Jct, MD 20701 USA
[5] Univ Maryland, Fischell Dept Bioengn, Baltimore, MD 21201 USA
Keywords
Cameras; Kidney; Location awareness; Image segmentation; Probes; Visualization; Biomedical measurement; Visual odometry; simultaneous localization and mapping (SLAM); optical coherence tomography (OCT); segmentation; kidney; deep learning; tracking
DOI
10.1109/TBME.2021.3116514
CLC number
R318 [Biomedical Engineering]
Discipline code
0831
Abstract
Objective: Optical coherence tomography (OCT) is an established medical imaging modality that has found widespread use due to its ability to visualize tissue structures at high resolution. Currently, OCT hand-held imaging probes lack positional information, making it difficult or even impossible to link a specific image to the location at which it was originally obtained. In this study, we propose a camera-based localization method to track and record the scanner position in real time, together with a deep learning-based segmentation method. Methods: We used camera-based visual odometry (VO) and simultaneous localization and mapping (SLAM) to compute and visualize the location of a hand-held OCT imaging probe. A deep convolutional neural network (CNN) was used for kidney tubule lumen segmentation. Results: The mean absolute error (MAE) and standard deviation (STD) for 1D translation were 0.15 mm and 0.26 mm, respectively. For 2D translation, the MAE and STD were 0.85 mm and 0.50 mm, respectively. The Dice coefficient of the segmentation method was 0.7. The t-statistics of the t-tests between the predicted and actual average densities and between the predicted and actual average diameters were 7.7547e-13 and 2.2288e-15, respectively. We also tested our localization method with automatic segmentation on a preserved kidney, and compared the average density maps and average diameter maps obtained from the comprehensive 3D scan and the VO system scan. Conclusion: Our results demonstrate that VO can track the probe location with high accuracy and provide a user-friendly visualization tool for reviewing OCT 2D images in 3D space. They also indicate that deep learning can provide accurate, high-speed segmentation. Significance: The proposed methods can potentially be used to predict delayed graft function (DGF) in kidney transplantation.
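The abstract only names the building blocks, so the following Python sketch is an assumption-laden illustration rather than the authors' pipeline: frame-to-frame monocular visual odometry with OpenCV (ORB features, essential-matrix estimation, pose recovery) and the Dice/MAE metrics used to report accuracy. The camera intrinsics K, the frame pair, and any metric scale calibration are hypothetical inputs not described in the record.

    import cv2
    import numpy as np

    def relative_pose(frame_prev, frame_curr, K):
        # Detect and match ORB features between two consecutive camera frames.
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(frame_prev, None)
        kp2, des2 = orb.detectAndCompute(frame_curr, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        # Essential matrix plus cheirality check give rotation R and a unit-norm
        # translation direction t; a monocular camera recovers translation only
        # up to scale, so millimeter errors presuppose a separate scale calibration.
        E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
        return R, t

    def dice_coefficient(pred_mask, true_mask, eps=1e-7):
        # Dice overlap between binary segmentation masks (the paper reports ~0.7).
        pred_mask = pred_mask.astype(bool)
        true_mask = true_mask.astype(bool)
        intersection = np.logical_and(pred_mask, true_mask).sum()
        return 2.0 * intersection / (pred_mask.sum() + true_mask.sum() + eps)

    def mean_absolute_error(estimated_mm, true_mm):
        # MAE between estimated and ground-truth probe translations, in mm.
        return float(np.mean(np.abs(np.asarray(estimated_mm) - np.asarray(true_mm))))

In a full SLAM system such per-frame poses would additionally be refined by mapping and loop closure; the sketch shows only the odometry step and the evaluation metrics quoted in the abstract.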
Pages: 1378-1385
Page count: 8