Deep reinforcement learning in medical imaging: A literature review

Cited: 126
Authors
Zhou, S. Kevin [1 ,2 ,3 ]
Le, Hoang Ngan [4 ]
Luu, Khoa [4 ]
Nguyen, Hien, V [5 ]
Ayache, Nicholas [6 ]
Affiliations
[1] Univ Sci & Technol China, Sch Biomed Engn, Med Imaging Robot & Analyt Comp Lab & Engineering, Hefei, Peoples R China
[2] Univ Sci & Technol China, Suzhou Inst Adv Res, Hefei, Peoples R China
[3] Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc Chinese Acad Sc, Beijing, Peoples R China
[4] Univ Arkansas, CSCE Dept, Fayetteville, AR 72701 USA
[5] Univ Houston, ECE Dept, Houston, TX 77004 USA
[6] Sophia Antipolis Mediterranean Ctr, INRIA, Valbonne, France
Funding
US National Science Foundation;
Keywords
Deep reinforcement learning; Medical imaging; Survey; POLICY SEARCH; LANDMARK; NETWORK; OPTIMIZATION; FRAMEWORK; AGENTS;
DOI
10.1016/j.media.2021.102193
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep reinforcement learning (DRL) augments the reinforcement learning framework, which learns a sequence of actions that maximizes the expected reward, with the representative power of deep neural networks. Recent works have demonstrated the great potential of DRL in medicine and healthcare. This paper presents a literature review of DRL in medical imaging. We start with a comprehensive tutorial of DRL, including the latest model-free and model-based algorithms. We then cover existing DRL applications for medical imaging, which are roughly divided into three main categories: (i) parametric medical image analysis tasks including landmark detection, object/lesion detection, registration, and view plane localization; (ii) solving optimization tasks including hyperparameter tuning, selecting augmentation strategies, and neural architecture search; and (iii) miscellaneous applications including surgical gesture segmentation, personalized mobile health intervention, and computational model personalization. The paper concludes with discussions of future perspectives. (c) 2021 Elsevier B.V. All rights reserved.
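The reinforcement-learning framework the abstract refers to can be illustrated with tabular Q-learning, the classical precursor of the deep Q-networks covered in the survey. The sketch below is hypothetical and not from the paper: an agent on a 5-state chain earns reward 1 only on reaching the goal state and learns, via the Q-learning update, a sequence of actions that maximizes the expected discounted reward.

```python
import random

# Hypothetical 5-state chain MDP: start at state 0, reward 1 on
# reaching state 4; action 0 moves left, action 1 moves right.
N_STATES, GOAL = 5, 4
MOVES = (-1, +1)

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + MOVES[action], 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with an epsilon-greedy behavior policy."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Explore with probability eps, otherwise act greedily.
            a = rng.randrange(2) if rng.random() < eps \
                else max((0, 1), key=lambda a: q[s][a])
            nxt, r, done = step(s, a)
            # Q-learning update: bootstrap from the best next-state value.
            q[s][a] += alpha * (r + gamma * max(q[nxt]) - q[s][a])
            s = nxt
    return q

q = train()
# Greedy policy extracted from the learned Q-table.
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(N_STATES)]
```

In deep RL, the table `q` is replaced by a neural network that maps states (e.g. medical images) to action values, which is what enables the landmark-detection and registration agents surveyed in the paper.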
Pages: 20
References (166 total)
[61] Kober, Jens; Bagnell, J. Andrew; Peters, Jan. Reinforcement learning in robotics: A survey [J]. INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2013, 32 (11): 1238-1274
[62]  
Koller D., 2009, Probabilistic graphical models: Principles and techniques
[63]  
Konda VR, 2000, ADV NEUR IN, V12, P1008
[64]  
Krebs Julian, 2017, Medical Image Computing and Computer Assisted Intervention MICCAI 2017. 20th International Conference. Proceedings: LNCS 10433, P344, DOI 10.1007/978-3-319-66182-7_40
[65] Krishnamurthy, Adarsh; Villongco, Christopher T.; Chuang, Joyce; Frank, Lawrence R.; Nigam, Vishal; Belezzuoli, Ernest; Stark, Paul; Krummen, David E.; Narayan, Sanjiv; Omens, Jeffrey H.; McCulloch, Andrew D.; Kerckhoffs, Roy C. P. Patient-specific models of cardiac biomechanics [J]. JOURNAL OF COMPUTATIONAL PHYSICS, 2013, 244: 4-21
[66] Kupcsik, Andras; Deisenroth, Marc Peter; Peters, Jan; Poh, Loh Ai; Vadakkepat, Prahlad; Neumann, Gerhard. Model-based contextual policy search for data-efficient generalization of robot skills [J]. ARTIFICIAL INTELLIGENCE, 2017, 247: 415-439
[67]  
Kupcsik Andras Gabor, 2013, P AAAI C ART INT, P1401, DOI DOI 10.1609/AAAI.V27I1.8546
[68]  
Kurutach T, 2018, MODEL ENSEMBLE TRUST
[69]  
Lagoudakis M.G, 2017, VALUE FUNCTION APPRO, P1311
[70]  
Lay Nathan, 2013, Inf Process Med Imaging, V23, P450, DOI 10.1007/978-3-642-38868-2_38