Bidirectional brain image translation using transfer learning from generic pre-trained models

Cited: 0
|
Authors
Haimour, Fatima [1 ]
Al-Sayyed, Rizik [2 ]
Mahafza, Waleed [3 ]
Al-Kadi, Omar S. [2 ]
Affiliations
[1] Zarqa Univ, Fac Informat Technol, Zarqa 13110, Jordan
[2] Univ Jordan, King Abdullah 2 Sch Informat Technol, Amman 11942, Jordan
[3] Jordan Univ Hosp, Dept Diagnost Radiol, Amman 11942, Jordan
Keywords
Image translation; Transfer learning; Pre-trained model; Brain tumor; Magnetic resonance imaging; Computed tomography; CycleGAN;
DOI
10.1016/j.cviu.2024.104100
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Brain imaging plays a crucial role in the diagnosis and treatment of various neurological disorders, providing valuable insights into the structure and function of the brain. Techniques such as magnetic resonance imaging (MRI) and computed tomography (CT) enable non-invasive visualization of the brain, aiding in the understanding of brain anatomy, abnormalities, and functional connectivity. However, cost and radiation dose may limit the acquisition of specific image modalities, so medical image synthesis can be used to generate the required medical images without additional acquisitions. CycleGAN and other generative adversarial networks (GANs) are valuable tools for generating synthetic images across various fields. In the medical domain, where obtaining labeled medical images is labor-intensive and expensive, data scarcity is a major challenge. Recent studies propose using transfer learning to overcome this issue by adapting pre-trained CycleGAN models, initially trained on non-medical data, to generate realistic medical images. In this work, transfer learning was applied to the task of MR-to-CT image translation and vice versa using 18 pre-trained non-medical models, each fine-tuned to obtain the best results. Model performance was evaluated using four widely used image quality metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Universal Quality Index (UQI), and Visual Information Fidelity (VIF). Quantitative evaluation and qualitative perceptual analysis by radiologists demonstrate the potential of transfer learning in medical imaging and the effectiveness of generic pre-trained models. The results provide compelling evidence of exceptional model performance, attributable to the high quality of the training images and their similarity to actual human brain images. These results underscore the importance of carefully selecting appropriate and representative training images to optimize performance in brain image analysis tasks.
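Of the four metrics named in the abstract, PSNR and UQI have simple closed forms that can be sketched directly in NumPy (SSIM and VIF require windowed local statistics and are typically taken from a library such as scikit-image). The sketch below uses random stand-in images, not data from the paper; the variable names and noise level are illustrative assumptions.

```python
import numpy as np

def psnr(x, y, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB between two images (higher is better)."""
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def uqi(x, y):
    """Universal Quality Index (Wang & Bovik); global form, ranges in [-1, 1],
    with 1.0 meaning the two images are identical."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

rng = np.random.default_rng(0)
ct = rng.random((64, 64))  # stand-in "real" CT slice
# stand-in synthesized CT: the real slice plus mild Gaussian noise
synth = np.clip(ct + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)

print(psnr(ct, synth))  # high PSNR indicates a close match
print(uqi(ct, ct))      # identical images give exactly 1.0
```

In practice these scores are averaged over all translated slices of a test set; UQI is often also computed over sliding windows rather than globally, which is what library implementations do.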
Pages: 15