Implicit neural representation for radiation therapy dose distribution

Times Cited: 10
Authors
Vasudevan, Varun [1]
Shen, Liyue [2]
Huang, Charles [3]
Chuang, Cynthia [4]
Islam, Md Tauhidul [4]
Ren, Hongyi [2]
Yang, Yong [4]
Dong, Peng [4]
Xing, Lei [2,4]
Affiliations
[1] Stanford Univ, Inst Computat & Math Engn, Stanford, CA 94305 USA
[2] Stanford Univ, Dept Elect Engn, Stanford, CA 94305 USA
[3] Stanford Univ, Dept Bioengn, Stanford, CA 94305 USA
[4] Stanford Univ, Dept Radiat Oncol, Stanford, CA 94305 USA
Funding
National Institutes of Health, USA;
Keywords
dose distribution; implicit neural representation; sinusoidal activation; RESOLUTION; NETWORK;
DOI
10.1088/1361-6560/ac6b10
CLC Number
R318 [Biomedical Engineering];
Subject Classification Code
0831;
Abstract
Objective. Dose distribution data plays a pivotal role in radiotherapy treatment planning. The data is typically represented on voxel grids, whose size ranges from 10^6 to 10^8 voxels. A concise representation of the treatment plan is of great value in facilitating treatment planning and downstream applications. This work aims to develop an implicit neural representation of 3D dose distribution data.
Approach. Instead of storing the dose values at each voxel, the proposed approach uses the weights of a multilayer perceptron (MLP) to characterize the dosimetric data for plan representation and subsequent applications. We train a coordinate-based MLP with sinusoidal activations to map the voxel spatial coordinates to the corresponding dose values. We identify the best architecture for a given parameter budget and use it to train a model for each patient. The trained MLP is evaluated at each voxel location to reconstruct the dose distribution. We perform extensive experiments on dose distributions of prostate, spine, and head and neck tumor cases to evaluate the quality of the proposed representation. We also study how the representation quality changes with model size and activation function.
Main results. Using coordinate-based MLPs with sinusoidal activations, we can learn implicit representations that achieve a mean-squared error of 10^-6 and a peak signal-to-noise ratio greater than 50 dB at a target bitrate of ~1 across all the datasets, with a compression ratio of ~32. Our results also show that model sizes corresponding to a bitrate of 1-2 achieve optimal accuracy; for smaller bitrates, performance starts to drop significantly.
Significance. The proposed model provides a low-dimensional, implicit, and continuous representation of 3D dose data. In summary, given a dose distribution, we systematically show how to find a compact model that fits the data accurately. This study lays the groundwork for future applications of neural representations of dose data in radiation oncology.
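The approach described in the abstract can be illustrated with a short sketch: a coordinate-based MLP with sinusoidal activations is fitted, per patient, to map normalized voxel coordinates to dose values. The following minimal PyTorch example is an assumption-laden illustration; the layer width, depth, omega_0 frequency, optimizer settings, and the names DoseMLP and fit are hypothetical and not the authors' exact configuration.

```python
# Minimal sketch of a coordinate-based MLP with sinusoidal activations (SIREN-style).
# Hyperparameters and names are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sinusoidal activation: sin(omega_0 * (Wx + b))."""
    def __init__(self, in_features, out_features, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class DoseMLP(nn.Module):
    """Maps normalized (x, y, z) voxel coordinates to a scalar dose value."""
    def __init__(self, hidden=64, depth=4):
        super().__init__()
        layers = [SineLayer(3, hidden)]
        layers += [SineLayer(hidden, hidden) for _ in range(depth - 1)]
        layers += [nn.Linear(hidden, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)

def fit(coords, doses, steps=2000, lr=1e-4):
    """Fit one model per patient.

    coords: (N, 3) voxel-center coordinates scaled to [-1, 1]
    doses:  (N, 1) corresponding normalized dose values
    """
    model = DoseMLP()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(coords), doses)
        loss.backward()
        opt.step()
    # Evaluating the trained model at every voxel coordinate reconstructs the dose grid.
    return model
```

Because the network takes continuous coordinates as input, the fitted model can be queried at arbitrary resolutions, and only its weights need to be stored, which is where the reported compression comes from.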
Pages: 12