Transfer language space with similar domain adaptation: a case study with hepatocellular carcinoma
Cited by: 2
Authors:
Tariq, Amara [1]
Kallas, Omar [2]
Balthazar, Patricia [2]
Lee, Scott Jeffery [2]
Desser, Terry [3]
Rubin, Daniel [3,4]
Gichoya, Judy Wawira [1]
Banerjee, Imon [1]
Affiliations:
[1] Mayo Clin, Machine Intelligence Med & Imaging MI 2 Lab, Phoenix, AZ 85054 USA
[2] Emory Univ, Dept Radiol, Atlanta, GA 30322 USA
[3] Stanford Univ, Dept Radiol, Palo Alto, CA 94304 USA
[4] Stanford Univ, Dept Biomed Data Sci, Palo Alto, CA USA
Keywords: Transfer learning; Language model; Radiology report; BERT; Word2vec
DOI: 10.1186/s13326-022-00262-8
Chinese Library Classification: Q [Biological Sciences]
Subject Classification: 07; 0710; 09
Abstract:
Background: Transfer learning is common practice in deep-learning image classification, where the data available for training a complex model with millions of parameters is often limited. Transferring language models, however, requires special attention: cross-domain vocabularies (e.g. between two different modalities such as MR and US) do not always overlap, whereas pixel intensity ranges largely overlap across imaging domains. Method: We present a concept of similar domain adaptation in which inter-institutional language models (context-dependent and context-independent) are transferred between two different modalities (ultrasound and MRI) to capture liver abnormalities. Results: Using MR and US screening exam reports for hepatocellular carcinoma as the use case, we apply the transfer-language-space strategy to automatically label imaging exams, with and without a structured template, with an average F1-score > 0.9. Conclusion: We conclude that transfer learning combined with fine-tuning of the discriminative model is often more effective for shared targeted tasks than training a language space from scratch.
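The abstract describes transferring a language model trained on reports from one modality and fine-tuning a discriminative classifier on reports from the other. The snippet below is a minimal illustrative sketch of that idea using the Hugging Face transformers Trainer API; the checkpoint path, the two-class label scheme, and the example MRI reports are hypothetical placeholders, not the authors' actual models or data.

```python
# Sketch of the transfer-language-space idea: a BERT encoder adapted to
# ultrasound reports is reused and fine-tuned to classify MRI reports.
# Checkpoint path, labels, and example reports are illustrative assumptions.
import torch
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# Hypothetical checkpoint: a BERT model already adapted to US reports.
SOURCE_CHECKPOINT = "path/to/us-report-bert"

tokenizer = AutoTokenizer.from_pretrained(SOURCE_CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(
    SOURCE_CHECKPOINT, num_labels=2  # e.g. abnormality present / absent
)

# Tiny illustrative MRI-report set; real training would use labeled exams.
mri_reports = [
    "No suspicious hepatic lesion identified.",
    "Arterially enhancing observation with washout, consistent with HCC.",
]
labels = [0, 1]

encodings = tokenizer(mri_reports, truncation=True, padding=True)


class ReportDataset(torch.utils.data.Dataset):
    """Wraps tokenized reports and labels for the Trainer API."""

    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item


train_dataset = ReportDataset(encodings, labels)

# Fine-tune the transferred language space on the target-modality reports.
training_args = TrainingArguments(
    output_dir="mri-transfer",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=2e-5,
)
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```

Note that the paper transfers both context-dependent (BERT) and context-independent (Word2vec) language spaces; this sketch only illustrates the BERT case under the stated assumptions.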