Machine Intelligence in Medicine and Imaging (MI ∙2) Lab, Mayo Clinic, Phoenix, AZ, USA.
Department of Radiology, Emory University, Atlanta, GA, USA.
J Biomed Semantics. 2022 Feb 23;13(1):8. doi: 10.1186/s13326-022-00262-8.
Transfer learning is a common practice in image classification with deep learning, where the available data are often too limited to train a complex model with millions of parameters. However, transferring language models requires special attention: whereas pixel intensity ranges largely overlap across imaging domains, cross-domain vocabularies (e.g., between two different modalities such as MR and US) do not always overlap.
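As a rough illustration of this vocabulary-overlap problem, the sketch below compares the word sets of two toy report corpora; the token pattern and the Jaccard measure are illustrative choices of ours, not taken from the paper:

```python
# Illustrative only: toy corpora and a simple Jaccard overlap measure,
# neither of which comes from the paper's institutional data.
import re

def vocabulary(reports):
    """Collect the set of lowercased word tokens across a corpus."""
    vocab = set()
    for text in reports:
        vocab.update(re.findall(r"[a-z]+", text.lower()))
    return vocab

mr_reports = ["Liver demonstrates no focal lesion on contrast-enhanced MRI."]
us_reports = ["Ultrasound shows coarsened hepatic echotexture, no focal lesion."]

mr_vocab, us_vocab = vocabulary(mr_reports), vocabulary(us_reports)
jaccard = len(mr_vocab & us_vocab) / len(mr_vocab | us_vocab)
print(f"Cross-modality vocabulary overlap (Jaccard): {jaccard:.2f}")
```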
We present the concept of similar domain adaptation, in which we transfer inter-institutional language models (context-dependent and context-independent) between two different modalities (ultrasound and MRI) to capture liver abnormalities.
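For the context-independent side of this transfer, one plausible reading is training static embeddings on the source modality's reports and reusing them on the target modality. The sketch below assumes gensim's Word2Vec and hand-made token lists standing in for the institutional corpora:

```python
# Assumptions: gensim's Word2Vec as the context-independent model and
# placeholder tokenized reports, not the paper's institutional data.
import numpy as np
from gensim.models import Word2Vec

mr_sentences = [["liver", "lesion", "arterial", "enhancement"],
                ["no", "suspicious", "hepatic", "observation"]]

# Learn the language space on the source modality (MR)...
w2v = Word2Vec(sentences=mr_sentences, vector_size=50, window=3,
               min_count=1, seed=0)

def featurize(tokens, wv, dim=50):
    """Average word vectors; out-of-vocabulary terms fall back to zeros."""
    vecs = [wv[t] for t in tokens if t in wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# ...then reuse its vectors to featurize a target-modality (US) report.
us_report = ["coarsened", "hepatic", "echotexture", "no", "lesion"]
print(featurize(us_report, w2v.wv).shape)  # -> (50,)
```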
We use MR and US screening exam reports for hepatocellular carcinoma as the use case and apply the transfer language space strategy to automatically label imaging exams, with and without a structured template, at an average F1-score > 0.9.
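The averaged F1-score can be reproduced in form (not in value; the labels below are hypothetical) with scikit-learn's macro-averaged f1_score:

```python
# Hypothetical labels: only the metric's form matches the paper, not its value.
from sklearn.metrics import f1_score

y_true = [1, 0, 1, 1, 0, 0]  # gold exam labels (1 = liver abnormality present)
y_pred = [1, 0, 1, 0, 0, 0]  # classifier output
print(f1_score(y_true, y_pred, average="macro"))
```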
We conclude that transfer learning combined with fine-tuning of the discriminative model is often more effective for shared targeted tasks than training a language space from scratch.
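To make the fine-tuning strategy concrete, here is a minimal sketch assuming a Hugging Face bert-base-uncased checkpoint as the transferred context-dependent language model and a two-class abnormality label; the paper's actual models, corpora, and hyperparameters are not reproduced here:

```python
# Assumptions: Hugging Face transformers/PyTorch and the generic
# "bert-base-uncased" checkpoint; reports and labels are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # e.g., normal vs. abnormal liver

# Hypothetical labeled reports standing in for MR/US screening exams.
reports = ["No suspicious hepatic observation.",
           "New arterially enhancing observation in segment VII."]
labels = torch.tensor([0, 1])

batch = tokenizer(reports, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few fine-tuning steps on the target task
    optimizer.zero_grad()
    out = model(**batch, labels=labels)  # cross-entropy loss over logits
    out.loss.backward()
    optimizer.step()

model.eval()
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1)
print(preds.tolist())
```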