Echocardiography Segmentation Based on Cross-modal Data Augmentation Method

Cited by: 1
Authors
Jin, Songbai [1 ]
Monkam, Patrice [1 ]
Lu, Wenkai [1 ]
Affiliations
[1] Tsinghua Univ, Dept Automat, Beijing, Peoples R China
Source
2022 IEEE INTERNATIONAL ULTRASONICS SYMPOSIUM (IEEE IUS), 2022
Funding
Beijing Natural Science Foundation;
Keywords
Ultrasound simulation; Generative Adversarial Network; Deep Learning; ULTRASOUND;
DOI
10.1109/IUS54386.2022.9957451
CLC Classification
O42 [Acoustics];
Subject Classification Codes
070206 ; 082403 ;
Abstract
Deep learning algorithms have been widely applied in medical image analysis. However, obtaining a large number of accurate samples with corresponding labels is difficult and time-consuming, and deep neural networks trained on small datasets often yield unsatisfactory performance. Synthesizing medical image samples is currently an effective solution to this problem. One commonly adopted approach for synthesizing labeled ultrasound images is physics-based simulation, such as the Field II software. Although these simulation-based methods have demonstrated acceptable performance, they are not only time-consuming but also require expert experience to fine-tune parameters. Therefore, we propose a CycleGAN-based method to synthesize annotated ultrasound images from computed tomography (CT) images. Our proposed method is validated on the left ventricle segmentation task. Experimental analyses demonstrate that our annotated sample generation method outperforms the general CycleGAN method in terms of segmentation results.
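The CycleGAN framework named in the abstract trains two generators between unpaired domains (here CT and ultrasound) and constrains them with a cycle-consistency loss: mapping an image to the other modality and back should reproduce the original. The sketch below illustrates only that loss term; the toy one-dimensional generators `G` and `F` are hypothetical stand-ins for the paper's networks, which are not described in this record.

```python
# Minimal sketch of CycleGAN's cycle-consistency loss, assuming two
# generators: G maps CT -> ultrasound, F maps ultrasound -> CT.
# Toy affine functions stand in for the real neural networks.

def G(x):
    """Hypothetical CT -> ultrasound mapping (toy affine stand-in)."""
    return 2.0 * x + 1.0

def F(y):
    """Hypothetical ultrasound -> CT mapping (exact inverse of G here)."""
    return (y - 1.0) / 2.0

def cycle_consistency_loss(ct_samples, us_samples):
    """L1 cycle loss: mean |F(G(x)) - x| plus mean |G(F(y)) - y|."""
    forward = sum(abs(F(G(x)) - x) for x in ct_samples) / len(ct_samples)
    backward = sum(abs(G(F(y)) - y) for y in us_samples) / len(us_samples)
    return forward + backward

cts = [0.1, 0.5, 0.9]   # toy CT intensities
uss = [0.2, 0.4, 0.8]   # toy ultrasound intensities
loss = cycle_consistency_loss(cts, uss)  # ~0, since F inverts G exactly
```

In a real training loop this term is added to the adversarial losses of both generator-discriminator pairs; when synthesizing *annotated* samples, the CT segmentation masks carry over to the generated ultrasound images, which is what makes the approach attractive for the left ventricle segmentation task.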
Pages: 3