Using a deep learning approach for implanted seed detection on fluoroscopy images in prostate brachytherapy

Cited by: 2
Authors
Yuan, Andy [1]
Podder, Tarun [2]
Yuan, Jiankui [2]
Zheng, Yiran [2]
Affiliations
[1] Youngstown State Univ, Youngstown, OH USA
[2] Univ Hosp, Cleveland Med Ctr, Cleveland, OH 44106 USA
Keywords
deep learning; prostate seed implant; brachytherapy; automatic seed identification
DOI
10.5114/jcb.2023.125512
Chinese Library Classification: R73 [Oncology]
Discipline code: 100214
Abstract
Purpose: To apply a deep learning approach to automatically detect implanted seeds on fluoroscopy images in prostate brachytherapy.
Material and methods: Forty-eight fluoroscopy images of patients who underwent permanent seed implant (PSI) were used for this study after Institutional Review Board approval. Pre-processing procedures used to prepare the training data included encapsulating each seed in a bounding box, re-normalizing seed dimensions, cropping to the prostate region, and converting the fluoroscopy images to PNG format. We employed a pre-trained Faster Region-based Convolutional Neural Network (Faster R-CNN) from the PyTorch library for automatic seed detection, and a leave-one-out cross-validation (LOOCV) procedure was applied to evaluate the model's performance.
Results: Almost all cases had a mean average precision (mAP) greater than 0.91, and most cases (83.3%) had a mean average recall (mAR) above 0.9. All cases achieved F1-scores exceeding 0.91. Averaged over all cases, mAP, mAR, and F1-score were 0.979, 0.937, and 0.957, respectively.
Conclusions: Although limitations remain in interpreting overlapping seeds, our model is reasonably accurate and shows potential for further applications.
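The abstract describes fine-tuning a pre-trained Faster R-CNN from the PyTorch ecosystem for single-class seed detection, followed by per-case evaluation under LOOCV. The sketch below illustrates what such a pipeline could look like; it is not the authors' code, and the dataset loader, hyperparameters, and score threshold are illustrative assumptions.

# Minimal sketch (assumptions, not the authors' implementation): a COCO-pretrained
# torchvision Faster R-CNN with its box head replaced for a single "seed" class,
# trained on all cases except the held-out one (one LOOCV fold), then used for
# inference on the held-out cropped fluoroscopy image.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor


def build_seed_detector(num_classes: int = 2) -> torch.nn.Module:
    """Faster R-CNN with the classification head resized to background + seed."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model


def train_one_fold(model, train_loader, device, epochs: int = 10):
    """Standard torchvision detection training loop for one LOOCV fold.
    train_loader is assumed to yield (images, targets) with boxes/labels dicts."""
    model.to(device).train()
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=5e-4)
    for _ in range(epochs):
        for images, targets in train_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss_dict = model(images, targets)  # dict of losses in training mode
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model


@torch.no_grad()
def detect_seeds(model, image, device, score_threshold: float = 0.5):
    """Run inference on one image tensor (3 x H x W, values in [0, 1]).
    Grayscale fluoroscopy frames would need to be replicated to 3 channels."""
    model.to(device).eval()
    output = model([image.to(device)])[0]  # dict with boxes, labels, scores
    keep = output["scores"] >= score_threshold
    return output["boxes"][keep].cpu(), output["scores"][keep].cpu()

Per-case mAP and mAR as reported in the abstract could then be computed by comparing the kept boxes against the held-out case's ground-truth bounding boxes with a standard detection-metric library; the 0.5 score threshold above is an arbitrary placeholder.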
Pages: 69-74
Number of pages: 6