Echocardiography Segmentation Based on Cross-modal Data Augmentation Method

Cited by: 1
Authors
Jin, Songbai [1 ]
Monkam, Patrice [1 ]
Lu, Wenkai [1 ]
Institutions
[1] Tsinghua Univ, Dept Automat, Beijing, Peoples R China
Source
2022 IEEE INTERNATIONAL ULTRASONICS SYMPOSIUM (IEEE IUS), 2022
Funding
Beijing Natural Science Foundation;
Keywords
Ultrasound simulation; Generative Adversarial Network; Deep Learning; ULTRASOUND;
DOI
10.1109/IUS54386.2022.9957451
CLC classification number
O42 [Acoustics];
Subject classification codes
070206; 082403;
Abstract
Deep learning algorithms have been widely applied in medical image analysis. However, obtaining a large number of accurate samples and their corresponding labels is difficult and time-consuming, and deep neural networks trained without sufficient samples may perform poorly. Synthesizing medical image samples is currently an effective solution to this problem. A commonly adopted method for synthesizing labeled ultrasound images is simulation, e.g., with the Field II software. Although simulation-based methods achieve acceptable performance, they are not only time-consuming but also require expert experience to fine-tune parameters. Therefore, we propose a CycleGAN-based method to synthesize annotated ultrasound images from computed tomography (CT) images. The proposed method is validated on the left-ventricle segmentation task. Experimental analyses demonstrate that our annotated-sample generation method outperforms the general CycleGAN in terms of segmentation results.
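The abstract does not detail the training objective, but CycleGAN-style unpaired translation rests on a cycle-consistency loss: a CT-to-ultrasound generator G and an ultrasound-to-CT generator F are trained so that F(G(x)) recovers x (and G(F(y)) recovers y), which lets a CT label mask travel with the image through translation. A minimal NumPy sketch, assuming toy linear "generators" (the names `G_ct2us` and `F_us2ct` are illustrative, not from the paper; real CycleGAN generators are deep CNNs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two generators: per-pixel affine maps chosen
# to be exact inverses of each other, so the cycle loss is near zero.
def G_ct2us(ct):
    """Hypothetical CT -> ultrasound generator."""
    return 0.8 * ct + 0.1

def F_us2ct(us):
    """Hypothetical ultrasound -> CT generator."""
    return (us - 0.1) / 0.8

def cycle_consistency_loss(ct_batch, us_batch):
    """L1 cycle loss: F(G(ct)) should recover ct, G(F(us)) should recover us."""
    forward_cycle = np.abs(F_us2ct(G_ct2us(ct_batch)) - ct_batch).mean()
    backward_cycle = np.abs(G_ct2us(F_us2ct(us_batch)) - us_batch).mean()
    return forward_cycle + backward_cycle

ct = rng.random((4, 64, 64))   # fake CT slices
us = rng.random((4, 64, 64))   # fake ultrasound frames

loss = cycle_consistency_loss(ct, us)
print(f"cycle-consistency loss: {loss:.2e}")  # ~0: F is G's exact inverse here
```

In full CycleGAN training this term is added to two adversarial losses (one per domain); because the geometry of the CT input is preserved through G, its segmentation mask can be reused as the annotation for the synthesized ultrasound frame.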
Pages: 3
Related Papers (50 total)
  • [1] A novel cross-modal data augmentation method based on contrastive unpaired translation network for kidney segmentation in ultrasound imaging
    Guo, Shuaizi
    Sheng, Xiangyu
    Chen, Haijie
    Zhang, Jie
    Peng, Qinmu
    Wu, Menglin
    Fischer, Katherine
    Tasian, Gregory E.
    Fan, Yong
    Yin, Shi
    MEDICAL PHYSICS, 2025
  • [2] CAESAR: concept augmentation based semantic representation for cross-modal retrieval
    Zhu, Lei
    Song, Jiayu
    Wei, Xiangxiang
    Yu, Hao
    Long, Jun
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (24): 34213-34243
  • [3] Deep learning-based retinal vessel segmentation with cross-modal evaluation
    Brea, Luisa Sanchez
    De Jesus, Danilo Andrade
    Klein, Stefan
    van Walsum, Theo
    MEDICAL IMAGING WITH DEEP LEARNING, VOL 121, 2020, 121: 709-720
  • [4] Enhanced Breast Lesion Classification via Knowledge Guided Cross-Modal and Semantic Data Augmentation
    Chen, Kun
    Guo, Yuanfan
    Yang, Canqian
    Xu, Yi
    Zhang, Rui
    Li, Chunxiao
    Wu, Rong
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2021, PT V, 2021, 12905: 53-63
  • [5] Data Augmentation with Cross-Modal Variational Autoencoders (DACMVA) for Cancer Survival Prediction
    Rajaram, Sara
    Mitchell, Cassie S.
    INFORMATION, 2024, 15 (01)
  • [6] Survey on Cross-modal Data Entity Resolution
    Cao, J.-J.
    Nie, Z.-B.
    Zheng, Q.-B.
    Lü, G.-J.
    Zeng, Z.-X.
    Ruan Jian Xue Bao/Journal of Software, 2023, 34 (12): 5822-5847
  • [7] A Disentanglement and Fusion Data Augmentation Approach for Echocardiography Segmentation
    Monkam, Patrice
    Jin, Songbai
    Tang, Bo
    Zhou, Xiang
    Lu, Wenkai
    2022 IEEE INTERNATIONAL ULTRASONICS SYMPOSIUM (IEEE IUS), 2022
  • [8] HCMSL: Hybrid Cross-modal Similarity Learning for Cross-modal Retrieval
    Zhang, Chengyuan
    Song, Jiayu
    Zhu, Xiaofeng
    Zhu, Lei
    Zhang, Shichao
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2021, 17 (01)
  • [9] Cross-modal retrieval method for thyroid ultrasound image and text based on generative adversarial network
    Xu, F.
    Ma, X.
    Liu, L.
    Shengwu Yixue Gongchengxue Zazhi/Journal of Biomedical Engineering, 2020, 37 (04): 641-651