Probe Positioning for Robot-Assisted Intraoperative Ultrasound Imaging Using Deep Reinforcement Learning

Cited by: 0
Authors
Hu, Y. [1 ]
Huang, Y. [2 ]
Song, A. [2 ]
Jones, C. K. [1 ]
Siewerdsen, J. H. [1 ,2 ,3 ]
Basar, B. [4 ]
Helm, P. A. [4 ]
Uneri, A. [2 ]
Affiliations
[1] Johns Hopkins Univ, Dept Comp Sci, Baltimore, MD USA
[2] Johns Hopkins Univ, Dept Biomed Engn, Baltimore, MD USA
[3] Univ Texas MD Anderson Canc Ctr, Dept Imaging Phys, Houston, TX USA
[4] Medtronic, Littleton, MA USA
Source
IMAGE-GUIDED PROCEDURES, ROBOTIC INTERVENTIONS, AND MODELING, MEDICAL IMAGING 2024 | 2024, Vol. 12928
Keywords
Image-guided surgery; ultrasound imaging; robotic ultrasound; reinforcement learning
DOI
10.1117/12.3006918
CLC Classification
TP18 [Theory of artificial intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Purpose. Finding desired scan planes in ultrasound (US) imaging is a critical first task that can be time-consuming, depends on operator experience, and is subject to inter-operator variability. As a result, interventional US imaging often requires dedicated, experienced sonographers in the operating room. This work presents a new approach that leverages deep reinforcement learning (RL) to assist probe positioning. Methods. A deep Q-network (DQN) is applied and evaluated for renal imaging and is tasked with locating the dorsal US scan plane. To avoid the need for large labeled US datasets, images were resliced from a large dataset of CT volumes and translated to US images using Field II, CycleGAN, and U-GAT-IT. The algorithm was evaluated on both synthesized and real US images, and its performance was quantified in terms of the agent's accuracy in reaching the target scan plane. Results. The learning-based synthesis methods outperformed the physics-based approach, yielding image quality qualitatively comparable to real US images. The RL agent successfully reached target scan planes when adjusting the probe's rotation, with the U-GAT-IT model demonstrating better generalizability (80.3% reachability) than CycleGAN (54.8%). Conclusions. The approach presents a novel RL training strategy that uses image synthesis for automated US probe positioning. Ongoing efforts aim to evaluate more advanced DQN models and image-based reward functions, and to support probe motion with higher degrees of freedom.
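The core idea in the abstract — an RL agent that adjusts probe rotation until it reaches a target scan plane — can be illustrated with a minimal sketch. This is not the authors' implementation: the paper uses an image-based deep Q-network, whereas the toy below uses a 1-DOF tabular Q-learning agent on a discretized angle grid; the 5-degree bins, step penalty, and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Toy 1-DOF setting: probe rotation discretized into 5-degree bins.
ANGLES = np.arange(-45, 50, 5)            # candidate probe orientations (deg)
TARGET = int(np.argmin(np.abs(ANGLES)))   # index of the target scan plane (0 deg)
ACTIONS = (-1, +1)                        # rotate one bin in either direction

def step(s, a):
    """Apply a rotation action; reward reaching the target, penalize extra moves."""
    s2 = int(np.clip(s + ACTIONS[a], 0, len(ANGLES) - 1))
    done = (s2 == TARGET)
    r = 1.0 if done else -0.05
    return s2, r, done

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular epsilon-greedy Q-learning (stand-in for the paper's image-based DQN)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((len(ANGLES), len(ACTIONS)))
    for _ in range(episodes):
        s = int(rng.integers(len(ANGLES)))          # random starting orientation
        for _ in range(100):
            a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
            s2, r, done = step(s, a)
            target_q = r + (0.0 if done else gamma * Q[s2].max())
            Q[s, a] += alpha * (target_q - Q[s, a])  # temporal-difference update
            s = s2
            if done:
                break
    return Q

def greedy_path(Q, s):
    """Roll out the learned greedy policy from a starting orientation index."""
    path = [s]
    for _ in range(50):
        if s == TARGET:
            break
        s, _, _ = step(s, int(np.argmax(Q[s])))
        path.append(s)
    return path
```

In the paper's setting, the state would be the current US image (synthesized from CT during training) rather than an angle index, and the action space would cover probe rotations about multiple axes, but the same Q-learning target drives the update.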
Pages: 5
References (13 records)
[1] De Silva T. Phys Med Biol, 2018, 63: 1.
[2] Dixon L, Lim A, Grech-Sollars M, Nandi D, Camp S. Intraoperative ultrasound in brain tumor surgery: A review and implementation guide. Neurosurgical Review, 2022, 45(4): 2503-2515.
[3] Jensen JA. Medical & Biological Engineering & Computing, 1996, 34: 351.
[4] Kim J. 8th International Conference on Learning Representations, 2018: 1.
[5] Li K, Wang J, Xu Y, Qin H, Liu D, Liu L, Meng MQ-H. Autonomous navigation of an ultrasound probe towards standard scan planes with deep reinforcement learning. 2021 IEEE International Conference on Robotics and Automation (ICRA 2021), 2021: 8302-8308.
[6] Lubner MG, Gettle LM, Kim DH, Ziemlewicz TJ, Dahiya N, Pickhardt P. Diagnostic and procedural intraoperative ultrasound: technique, tips and tricks for optimizing results. British Journal of Radiology, 2021, 94(1121).
[7] Marinetto E, Uneri A, De Silva T, Reaungamornrat S, Zbijewski W, Sisniega A, Vogt S, Kleinszig G, Pascau J, Siewerdsen JH. Integration of free-hand 3D ultrasound and mobile C-arm cone-beam CT: Feasibility and characterization for real-time guidance of needle insertion. Computerized Medical Imaging and Graphics, 2017, 58: 13-22.
[8] Singla R. The Open Kidney Ultrasound Data Set, 2023.
[9] Smit JN, Kuhlmann KFD, Ivashchenko OV, Thomson BR, Lango T, Kok NFM, Fusaglia M, Ruers TJM. Ultrasound-based navigation for open liver surgery using active liver tracking. International Journal of Computer Assisted Radiology and Surgery, 2022, 17(10): 1765-1773.
[10] Vagdargi P. IEEE Transactions on Medical Robotics and Bionics, 2022, 4: 28. DOI: 10.1109/TMRB.2021.3125322.