Patch-based generative adversarial neural network models for head and neck MR-only planning

Cited by: 72
Authors
Klages, Peter [1]
Benslimane, Ilyes [1]
Riyahi, Sadegh [1]
Jiang, Jue [1]
Hunt, Margie [1]
Deasy, Joseph O. [1]
Veeraraghavan, Harini [1]
Tyagi, Neelam [1]
Affiliation
[1] Mem Sloan Kettering Canc Ctr, Med Phys, 1275 York Ave, New York, NY 10021 USA
Keywords
conditional generative adversarial networks (cGAN); CycleGAN; generative adversarial networks (GAN); MR-Guided Radiotherapy; pix2pix; synthetic CT generation; SYNTHETIC CT; RADIOTHERAPY; DELINEATION; IMAGES;
DOI
10.1002/mp.13927
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Discipline Classification Code
1002; 100207; 1009
Abstract
Purpose: To evaluate pix2pix and CycleGAN and to assess the effects of multiple combination strategies on accuracy for patch-based synthetic computed tomography (sCT) generation for magnetic resonance (MR)-only treatment planning in head and neck (HN) cancer patients.

Materials and methods: Twenty-three deformably registered pairs of CT and mDixon FFE MR datasets from HN cancer patients treated at our institution were retrospectively analyzed to evaluate patch-based sCT accuracy via the pix2pix and CycleGAN models. To test the effects of overlapping sCT patches on the estimations, we (a) trained the models for three orthogonal views to observe the effects of spatial context, (b) increased the effective set size by using per-epoch data augmentation, and (c) evaluated the performance of three different approaches for combining overlapping Hounsfield unit (HU) estimations for varied patch overlap parameters. Twelve of the twenty-three cases corresponded to a curated dataset previously used for atlas-based sCT generation and were used for training with leave-two-out cross-validation. Eight cases were used for independent testing and included previously unseen image features such as fused vertebrae, a small protruding bone, and tumors large enough to deform normal body contours. We analyzed the impact of MR image preprocessing, including histogram standardization and intensity clipping, on sCT generation accuracy. Effects of mDixon contrast (in-phase vs water) differences were tested with three additional cases. The sCT generation accuracy was evaluated using the mean absolute error (MAE) and mean error (ME) in HU between the plan CT and sCT images. Dosimetric accuracy was evaluated for all clinically relevant structures in the independent testing set, and digitally reconstructed radiographs (DRRs) were evaluated with respect to the plan CT images.

Results: The cross-validated MAEs for the whole-HN region using pix2pix and CycleGAN were 66.9 ± 7.3 vs 82.3 ± 6.4 HU, respectively. On the independent testing set with additional artifacts and previously unseen image features, whole-HN region MAEs were 94.0 ± 10.6 and 102.9 ± 14.7 HU for pix2pix and CycleGAN, respectively. For patients with different tissue contrast (water mDixon MR images), the MAEs increased to 122.1 ± 6.3 and 132.8 ± 5.5 HU for pix2pix and CycleGAN, respectively. Our results suggest that combining overlapping sCT estimations at each voxel reduced both MAE and ME compared with single-view, non-overlapping patch results. Absolute percent mean/max dose errors were 2% or less for the PTV and all clinically relevant structures in the independent testing set, including structures with image artifacts. Quantitative DRR comparison between the planning CTs and sCTs showed agreement of bony region positions.

Conclusions: The dosimetric and MAE-based accuracy, along with the similarity between DRRs from sCTs and planning CTs, indicates that pix2pix and CycleGAN are promising methods for MR-only treatment planning for HN cancer. Our investigation of overlapping patch-based HU estimation also indicates that combining the transformation estimates of overlapping patches can reduce generation errors while providing a tool to estimate the aleatoric uncertainty of the MR-to-CT model transformation. However, because of the small patient sample sizes, further studies are required.
Pages: 626-642
Number of pages: 17