Wang Shuai, Xu Xiaojuan, Du Huiqian, Chen Yan, Mei Wenbo
School of Information and Electronics, Beijing Institute of Technology, Beijing, China.
Department of Diagnostic Imaging, National Cancer Center, National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China.
Med Phys. 2023 Jan;50(1):297-310. doi: 10.1002/mp.15937. Epub 2022 Aug 29.
It is challenging for radiologists and gynecologists to identify the type of ovarian lesions by reading magnetic resonance (MR) images. Recently developed convolutional neural networks (CNNs) have made great progress in computer vision, but their architectures still require modification when applied to medical images. This study aims to improve the feature extraction capability of CNNs, thus promoting the diagnostic performance in discriminating between benign and malignant ovarian lesions.
We introduce a feature fusion architecture and insert attention modules into the neural network. The features extracted from different middle layers are integrated with reoptimized spatial and channel weights. We add a loss function to constrain the additional probability vector generated from the integrated features, thus guiding the middle layers to emphasize useful information. We analyzed 159 lesions imaged by dynamic contrast-enhanced MR imaging (DCE-MRI), including 73 benign lesions and 86 malignant lesions. Senior radiologists selected and labeled the tumor regions based on the pathology reports. The tumor regions were then cropped into 7494 nonoverlapping image patches for training and testing. The type of a single tumor was determined by the average probability score of the image patches belonging to it.
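The channel-attention reweighting described above can be illustrated with a minimal squeeze-and-excitation-style sketch in NumPy. All names, dimensions, and the random stand-in weights below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel reweighting (illustrative sketch).

    feat: (C, H, W) feature map from one middle layer of a CNN.
    w1, w2: bottleneck projection matrices (random stand-ins for learned weights).
    """
    squeezed = feat.mean(axis=(1, 2))              # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)        # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid channel gate in (0, 1)
    return feat * gate[:, None, None]              # scale each channel by its gate

rng = np.random.default_rng(0)
C, H, W, R = 8, 4, 4, 2                            # channels, spatial dims, reduction ratio
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // R, C))
w2 = rng.standard_normal((C, C // R))

out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

In the paper's fusion scheme, several such reweighted middle-layer features would be integrated (together with spatial weights) before the auxiliary classification loss is applied; the sketch shows only the per-layer channel gating step.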
We implemented fivefold cross-validation to characterize our proposed method, and the distribution of performance metrics was reported. For all the test image patches, the average accuracy of our method is 70.5% with an average area under the curve (AUC) of 0.785, versus 69.4% and 0.773 for the baseline. For the diagnosis of single tumors, our model achieved an average accuracy of 82.4% and an average AUC of 0.916, which were better than the baseline (81.8% and 0.899). Moreover, we evaluated the performance of our proposed method utilizing different CNN backbones and different attention mechanisms.
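The tumor-level decision rule, averaging the per-patch probability scores of all patches cropped from one lesion, can be sketched as follows. The probability values and the 0.5 threshold are hypothetical placeholders, not reported by the study:

```python
import numpy as np

def tumor_prediction(patch_probs, threshold=0.5):
    """Aggregate patch-level malignancy probabilities into one tumor-level call.

    patch_probs: 1-D array of per-patch P(malignant) output by the CNN.
    Returns (mean probability, predicted label).
    """
    mean_prob = float(np.mean(patch_probs))
    label = "malignant" if mean_prob >= threshold else "benign"
    return mean_prob, label

# Hypothetical scores for the patches cropped from a single lesion.
probs = np.array([0.62, 0.71, 0.55, 0.48, 0.66])
mean_prob, label = tumor_prediction(probs)
print(mean_prob, label)  # 0.604 malignant
```

Averaging over patches is what separates the patch-level figures (70.5% accuracy, AUC 0.785) from the higher tumor-level figures (82.4%, AUC 0.916): individual patch errors tend to cancel when a lesion contributes many patches.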
The texture features extracted from different middle layers are crucial for ovarian lesion diagnosis. Our proposed method can enhance the feature extraction capabilities of different layers of the network, thereby improving diagnostic performance.