Strategy to implement a convolutional neural network based ideal model observer via transfer learning for multi-slice simulated breast CT images.

Affiliations

School of Integrated Technology, Yonsei University, Republic of Korea.

Department of Artificial Intelligence, Yonsei University, Republic of Korea.

Publication Information

Phys Med Biol. 2023 May 30;68(11). doi: 10.1088/1361-6560/acd222.

Abstract

In this work, we propose a convolutional neural network (CNN)-based multi-slice ideal model observer that uses transfer learning (TL-CNN) to reduce the required number of training samples.

To train the model observers, we generate simulated breast CT image volumes reconstructed with the Feldkamp-Davis-Kress algorithm using a ramp filter and a Hanning-weighted ramp filter. Observer performance is evaluated on a background-known-statistically (BKS)/signal-known-exactly (SKE) task with a spherical signal and on a BKS/signal-known-statistically (SKS) task with random signals generated by a stochastic growth method. We compare the detectability of the CNN-based model observer with that of conventional linear model observers for multi-slice images, i.e. a multi-slice channelized Hotelling observer (CHO) and a volumetric CHO. We also analyze the detectability of the TL-CNN for different numbers of training samples to examine how robust its performance is to a limited number of training samples. To further analyze the effectiveness of transfer learning, we calculate the correlation coefficients of the filter weights in the CNN-based multi-slice model observer.

When transfer learning is used for the CNN-based multi-slice ideal model observer, the TL-CNN achieves the same performance with 91.7% fewer training samples than when transfer learning is not used. Moreover, compared to the conventional linear model observers, the proposed CNN-based multi-slice model observers achieve 45% higher detectability in the SKS detection tasks and 13% higher detectability in the SKE detection tasks. The correlation coefficient analysis shows that the filters in most of the layers are highly correlated, demonstrating the effectiveness of transfer learning for multi-slice model observer training.

Deep learning-based model observers require large numbers of training samples, and the required number increases with the dimensionality of the image (i.e. the number of slices). By applying transfer learning, the required number of training samples is significantly reduced without a performance drop.
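The multi-slice and volumetric CHO baselines above reduce each image volume to a small vector of channel outputs and apply the Hotelling template in that channel space, with performance summarized by the detectability index d'. A minimal NumPy sketch of this standard computation is shown below; the data layout (flattened image volumes) and the channel matrix (e.g. Laguerre-Gauss or Gabor channels) are illustrative assumptions, not the paper's exact implementation.

    import numpy as np

    def cho_detectability(signal_present, signal_absent, channels):
        """Channelized Hotelling observer (CHO) detectability index d'.

        signal_present, signal_absent: (n_images, n_voxels) arrays, each row a
            flattened multi-slice image with or without the signal.
        channels: (n_voxels, n_channels) channel templates.
        """
        # Channelize the images: v = U^T g
        v_sp = signal_present @ channels
        v_sa = signal_absent @ channels

        # Mean channel-output difference and average intra-class covariance
        delta_v = v_sp.mean(axis=0) - v_sa.mean(axis=0)
        cov = 0.5 * (np.cov(v_sp, rowvar=False) + np.cov(v_sa, rowvar=False))

        # Hotelling template w = K^-1 delta_v and d'^2 = delta_v^T K^-1 delta_v
        w = np.linalg.solve(cov, delta_v)
        return np.sqrt(delta_v @ w)

The d' computed this way is the figure of merit against which the detectability gains quoted for the CNN-based observers (45% for SKS, 13% for SKE) are measured.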
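The transfer-learning step can be pictured as initializing the multi-slice CNN observer with filter weights learned on a related, data-rich source task, fine-tuning it on the small target training set, and then checking how similar the corresponding filters remain via correlation coefficients. The PyTorch sketch below is a hypothetical illustration: the network architecture, the source task, and the training loop are assumptions, since the abstract does not specify them.

    import copy
    import torch
    import torch.nn as nn

    class MultiSliceObserver(nn.Module):
        """Illustrative 3D CNN observer; the paper's actual architecture is not given here."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(32, 1)  # signal-present vs. signal-absent score

        def forward(self, x):  # x: (batch, 1, slices, height, width)
            return self.classifier(self.features(x).flatten(1))

    source_model = MultiSliceObserver()
    # ... train source_model here on a data-rich, related source task ...

    target_model = copy.deepcopy(source_model)  # transfer: reuse the learned filter weights
    optimizer = torch.optim.Adam(target_model.parameters(), lr=1e-4)
    loss_fn = nn.BCEWithLogitsLoss()
    # ... fine-tune target_model here on the small target-domain training set ...

    # Correlation coefficient between a layer's filters before and after fine-tuning,
    # analogous to the filter-weight correlation analysis described above.
    w_src = source_model.features[0].weight.detach().flatten()
    w_tgt = target_model.features[0].weight.detach().flatten()
    corr = torch.corrcoef(torch.stack([w_src, w_tgt]))[0, 1]

A high correlation in most layers, as reported in the abstract, indicates that the transferred filters need little adjustment, which is consistent with far fewer target-domain samples being sufficient.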
