CT-Less Whole-Body Bone Segmentation of PET Images Using a Multimodal Deep Learning Network

Cited by: 0
Authors
Bao, Nan [1 ]
Zhang, Jiaxin [2 ]
Li, Zhikun [1 ]
Wei, Shiyu [3 ]
Zhang, Jiazhen [4 ]
Greenwald, Stephen E. [5 ]
Onofrey, John A. [6 ]
Lu, Yihuan [7 ]
Xu, Lisheng [1 ]
Affiliations
[1] Northeastern Univ, Coll Med & Biol Informat Engn, Shenyang 110167, Peoples R China
[2] Dahua Technol Co Ltd, Image Algorithm Dept, Hangzhou 310053, Peoples R China
[3] Liaoning Med Device Test Inst, Med Imaging & Software Dept, Shenyang 110179, Peoples R China
[4] Yale Univ, Dept Biomed Engn, New Haven, CT 06512 USA
[5] Queen Mary Univ London, Blizard Inst, Barts & London Sch Med & Dent, London E1 4NS, England
[6] Yale Univ, Dept Biomed Engn, Dept Radiol & Biomed Imaging, New Haven, CT 06512 USA
[7] United Imaging Healthcare, Shanghai 201807, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image segmentation; Computed tomography; Bones; Tumors; Cancer; Attenuation; Accuracy; Positron emission tomography; Image coding; Electronic mail; CT-less; deep learning; multimodal feature fusion; PET; whole-body bone segmentation; FDG-PET; TUMORS; RECONSTRUCTION; ATTENUATION; FUSION;
DOI
10.1109/JBHI.2024.3501386
Chinese Library Classification
TP [Automation & Computer Technology];
Discipline Code
0812;
Abstract
In bone cancer imaging, positron emission tomography (PET) is ideal for the diagnosis and staging of bone cancers due to its high sensitivity to malignant tumors. The diagnosis of bone cancer requires tumor analysis and localization, where accurate and automated whole-body bone segmentation (WBBS) is often needed. Current WBBS for PET imaging is based on paired Computed Tomography (CT) images. However, mismatches between CT and PET images often occur due to patient motion, which leads to erroneous bone segmentation and thus to inaccurate tumor analysis. Furthermore, there are some instances where CT images are unavailable for WBBS. In this work, we propose a novel multimodal fusion network (MMF-Net) for WBBS of PET images, without the need for CT images. Specifically, the tracer activity (lambda-MLAA), attenuation map (mu-MLAA), and synthetic attenuation map (mu-DL) images are introduced into the training data. We first design a multi-encoder structure to fully learn modality-specific encoding representations of the three PET modality images through independent encoding branches. Then, we propose a multimodal fusion module in the decoder to further integrate the complementary information across the three modalities. Additionally, we introduce revised convolution units, Squeeze-and-Excitation (SE) Normalization, and deep supervision to improve segmentation performance. Extensive comparisons and ablation experiments, using 130 whole-body PET image datasets, show promising results. We conclude that the proposed method can achieve WBBS with moderate to high accuracy using PET information only, which potentially can be used to overcome the current limitations of CT-based approaches, while minimizing exposure to ionizing radiation.
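The abstract describes fusing three modality-specific encoder outputs and re-weighting the combined channels with SE-style (squeeze-and-excitation) gating. The sketch below is not the authors' code; it is a minimal NumPy illustration, with hypothetical names (`se_recalibrate`, `fuse_modalities`) and toy weights, of how concatenation-based fusion followed by SE channel recalibration can work on 3-D feature maps:

```python
import numpy as np

rng = np.random.default_rng(0)

def se_recalibrate(feat, w1, w2):
    """SE-style recalibration: squeeze (global average pool per channel),
    excite (two-layer MLP), then rescale each channel by a sigmoid gate."""
    z = feat.mean(axis=(1, 2, 3))             # squeeze -> (C,)
    h = np.maximum(w1 @ z, 0.0)               # excitation bottleneck, ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))       # per-channel gate in (0, 1)
    return feat * s[:, None, None, None]      # channel-wise rescaling

def fuse_modalities(feats, w1, w2):
    """Concatenate per-modality feature maps along the channel axis, then
    let the SE gate re-weight channels across the three modalities."""
    fused = np.concatenate(feats, axis=0)     # (3C, D, H, W)
    return se_recalibrate(fused, w1, w2)

# Toy example: three 4-channel 3-D feature maps, one per PET modality
# (standing in for lambda-MLAA, mu-MLAA, and mu-DL encoder outputs).
C, D, H, W, r = 4, 2, 6, 6, 2
feats = [np.abs(rng.standard_normal((C, D, H, W))) for _ in range(3)]
w1 = rng.standard_normal((3 * C // r, 3 * C))  # bottleneck weights
w2 = rng.standard_normal((3 * C, 3 * C // r))
out = fuse_modalities(feats, w1, w2)
print(out.shape)  # fused map keeps spatial size: (12, 2, 6, 6)
```

In the paper's network this fusion happens inside the decoder with learned convolutional weights; the gating step is what lets complementary channels from different modalities be emphasized or suppressed.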
Pages
1151 - 1164 (14 pages)