Bidirectional brain image translation using transfer learning from generic pre-trained models

Cited by: 0
Authors
Haimour, Fatima [1 ]
Al-Sayyed, Rizik [2 ]
Mahafza, Waleed [3 ]
Al-Kadi, Omar S. [2 ]
Affiliations
[1] Zarqa Univ, Fac Informat Technol, Zarqa 13110, Jordan
[2] Univ Jordan, King Abdullah 2 Sch Informat Technol, Amman 11942, Jordan
[3] Jordan Univ Hosp, Dept Diagnost Radiol, Amman 11942, Jordan
Keywords
Image translation; Transfer learning; Pre-trained model; Brain tumor; Magnetic resonance imaging; Computed tomography; CycleGAN;
DOI
10.1016/j.cviu.2024.104100
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Brain imaging plays a crucial role in the diagnosis and treatment of various neurological disorders, providing valuable insights into the structure and function of the brain. Techniques such as magnetic resonance imaging (MRI) and computed tomography (CT) enable non-invasive visualization of the brain, aiding in the understanding of brain anatomy, abnormalities, and functional connectivity. However, cost and radiation dose may limit the acquisition of specific image modalities, so medical image synthesis can be used to generate required medical images without additional acquisitions. CycleGAN and other GANs are valuable tools for generating synthetic images across various fields. In the medical domain, where obtaining labeled medical images is labor-intensive and expensive, data scarcity is a major challenge. Recent studies propose using transfer learning to overcome this issue by adapting pre-trained CycleGAN models, initially trained on non-medical data, to generate realistic medical images. In this work, transfer learning was applied to the task of bidirectional MR-CT image translation using 18 pre-trained non-medical models, each fine-tuned to achieve the best results. The models' performance was evaluated using four widely used image quality metrics: peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), universal quality index (UQI), and visual information fidelity (VIF). Quantitative evaluation and qualitative perceptual analysis by radiologists demonstrate the potential of transfer learning in medical imaging and the effectiveness of generic pre-trained models. The results provide compelling evidence of the model's strong performance, which can be attributed to the high quality of the training images and their similarity to actual human brain images. These results underscore the significance of carefully selecting appropriate and representative training images to optimize performance in brain image analysis tasks.
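Of the four metrics named in the abstract, PSNR and UQI have closed-form global definitions and can be sketched directly in NumPy (SSIM and VIF normally require windowed, multi-scale implementations, e.g. from scikit-image). The function names below are illustrative, not taken from the paper; UQI follows the Wang-Bovik formula computed over the whole image rather than a sliding window.

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def uqi(ref, test):
    """Universal Quality Index (global form): combines correlation loss,
    luminance distortion, and contrast distortion; 1.0 for identical images."""
    x = ref.astype(np.float64).ravel()
    y = test.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return (4.0 * cov * mx * my) / ((vx + vy) * (mx ** 2 + my ** 2))
```

In a CycleGAN translation study these would be computed between each synthesized image (e.g. a CT generated from MR) and its paired ground-truth acquisition, then averaged over the test set.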
Pages: 15
Related Papers
50 records in total
  • [1] Pre-trained deep learning models for brain MRI image classification
    Krishnapriya, Srigiri
    Karuna, Yepuganti
    FRONTIERS IN HUMAN NEUROSCIENCE, 2023, 17
  • [2] Classification of Regional Food Using Pre-Trained Transfer Learning Models
    Gadhiya, Jeet
    Khatik, Anjali
    Kodinariya, Shruti
    Ramoliya, Dipak
    7th International Conference on Electronics, Communication and Aerospace Technology, ICECA 2023 - Proceedings, 2023, : 1237 - 1241
  • [3] Transfer learning with pre-trained conditional generative models
    Yamaguchi, Shin'ya
    Kanai, Sekitoshi
    Kumagai, Atsutoshi
    Chijiwa, Daiki
    Kashima, Hisashi
    MACHINE LEARNING, 2025, 114 (04)
  • [4] Towards Inadequately Pre-trained Models in Transfer Learning
    Deng, Andong
    Li, Xingjian
    Hu, Di
    Wang, Tianyang
    Xiong, Haoyi
    Xu, Cheng-Zhong
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 19340 - 19351
  • [5] Comparison of Pre-Trained Models Using Transfer Learning for Detecting Plant Disease
    Chellapandi, Bincy
    Vijayalakshmi, M.
    Chopra, Shalu
    2021 IEEE INTERNATIONAL CONFERENCE ON COMPUTING, COMMUNICATION, AND INTELLIGENT SYSTEMS (ICCCIS), 2021, : 383 - 387
  • [6] Classification of MR Brain Images for Detection of Tumor with Transfer Learning from Pre-trained CNN Models
    Prakash, R. Meena
    Kumari, R. Shantha Selva
    2019 INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS, SIGNAL PROCESSING AND NETWORKING (WISPNET 2019): ADVANCING WIRELESS AND MOBILE COMMUNICATIONS TECHNOLOGIES FOR 2020 INFORMATION SOCIETY, 2019, : 508 - 511
  • [7] Meta Distant Transfer Learning for Pre-trained Language Models
    Wang, Chengyu
    Pan, Haojie
    Qiu, Minghui
    Yang, Fei
    Huang, Jun
    Zhang, Yin
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 9742 - 9752
  • [8] LogME: Practical Assessment of Pre-trained Models for Transfer Learning
    You, Kaichao
    Liu, Yong
    Wang, Jianmin
    Long, Mingsheng
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [9] Reinforced Curriculum Learning on Pre-Trained Neural Machine Translation Models
    Zhao, Mingjun
    Wu, Haijiang
    Niu, Di
    Wang, Xiaoli
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 9652 - 9659
  • [10] Hub-Pathway: Transfer Learning from A Hub of Pre-trained Models
    Shu, Yang
    Cao, Zhangjie
    Zhang, Ziyang
    Wang, Jianmin
    Long, Mingsheng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,