Thyroid Nodule Classification in Ultrasound Images by Fine-Tuning Deep Convolutional Neural Network.

Author Information

Chi Jianning, Walia Ekta, Babyn Paul, Wang Jimmy, Groot Gary, Eramian Mark

Affiliations

Department of Computer Science, University of Saskatchewan, 176 Thorvaldson Bldg, 110 Science Place, Saskatoon, SK, S7N 5C9, Canada.

Department of Medical Imaging, University of Saskatchewan, 103 Hospital Dr, Saskatoon, SK, S7N 0W8, Canada.

Publication Information

J Digit Imaging. 2017 Aug;30(4):477-486. doi: 10.1007/s10278-017-9997-y.

Abstract

With many thyroid nodules being incidentally detected, it is important to identify as many malignant nodules as possible while excluding those that are highly likely to be benign from fine needle aspiration (FNA) biopsies or surgeries. This paper presents a computer-aided diagnosis (CAD) system for classifying thyroid nodules in ultrasound images. We use a deep learning approach to extract features from thyroid ultrasound images. Ultrasound images are pre-processed to calibrate their scale and remove artifacts. A pre-trained GoogLeNet model is then fine-tuned using the pre-processed image samples, which leads to superior feature extraction. The extracted features of the thyroid ultrasound images are sent to a cost-sensitive Random Forest classifier to classify the images into "malignant" and "benign" cases. The experimental results show that the proposed fine-tuned GoogLeNet model achieves excellent classification performance, attaining 98.29% classification accuracy, 99.10% sensitivity, and 93.90% specificity for the images in an open-access database (Pedraza et al. 16), and 96.34% classification accuracy, 86% sensitivity, and 99% specificity for the images in our local health region database.
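The pipeline summarized above (fine-tune a pre-trained GoogLeNet on pre-processed ultrasound images, reuse the network as a feature extractor, and classify the extracted features with a cost-sensitive random forest) can be sketched roughly as follows. This is a minimal illustration rather than the authors' implementation: it uses torchvision's GoogLeNet and scikit-learn's RandomForestClassifier, approximates the cost-sensitive step with class weights, and the pre-processing, data loader name (train_loader), hyperparameters, and cost weights are all assumptions.

```python
# Sketch of the abstract's pipeline with illustrative, assumed settings:
# (1) fine-tune an ImageNet-pre-trained GoogLeNet on labelled ultrasound images,
# (2) reuse the fine-tuned network as a feature extractor,
# (3) classify the features with a class-weighted (cost-sensitive) random forest.
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms
from sklearn.ensemble import RandomForestClassifier

# Simplified pre-processing: the paper calibrates scale and removes artifacts;
# here we only convert to 3 channels, resize, and normalize for GoogLeNet.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the pre-trained GoogLeNet and swap its 1000-way ImageNet classifier
# for a 2-way head ("benign" vs. "malignant") before fine-tuning.
model = torchvision.models.googlenet(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune(loader, epochs=5):
    """Plain supervised fine-tuning loop (epochs/lr are illustrative)."""
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

@torch.no_grad()
def extract_features(loader):
    """Return activations from just before the final linear layer."""
    model.eval()
    head, model.fc = model.fc, nn.Identity()  # temporarily bypass the classifier
    feats, labels = [], []
    for images, y in loader:
        feats.append(model(images.to(device)).cpu())
        labels.append(y)
    model.fc = head
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

# Cost sensitivity is approximated here with class weights that penalize
# missed malignant cases more heavily; the 1:5 ratio is an assumption.
rf = RandomForestClassifier(n_estimators=200,
                            class_weight={0: 1.0, 1: 5.0},
                            random_state=0)
# Usage (train_loader is a hypothetical DataLoader over labelled images):
# fine_tune(train_loader)
# X, y = extract_features(train_loader)
# rf.fit(X, y)
```

Replacing the 1000-way ImageNet head with a 2-way layer lets the pre-trained features adapt to the ultrasound domain during fine-tuning, while the separate random-forest stage mirrors the abstract's two-step design of feature extraction followed by cost-sensitive classification.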

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/80ec/5537102/ca594c489961/10278_2017_9997_Fig1_HTML.jpg
