Tareke Tewele W, Leclerc Sarah, Vuillemin Catherine, Buffier Perrine, Crevisy Elodie, Nguyen Amandine, Monnier Meteau Marie-Paule, Legris Pauline, Angiolini Serge, Lalande Alain
ICMUB Laboratory, UMR CNRS 6302, University of Burgundy, 7 Bld Jeanne d'Arc, 21000 Dijon, France.
Medical Imaging Department, Hospital of Bastia, 20600 Bastia, France.
J Imaging. 2024 Aug 22;10(8):203. doi: 10.3390/jimaging10080203.
In clinical practice, thyroid nodules are typically visually evaluated by expert physicians using 2D ultrasound images. Based on their assessment, a fine needle aspiration (FNA) may be recommended. However, visually classifying thyroid nodules from ultrasound images may lead to unnecessary fine needle aspirations for patients. The aim of this study is to develop an automatic thyroid ultrasound image classification system to prevent unnecessary FNAs.
An automatic computer-aided artificial intelligence system is proposed for classifying thyroid nodules using a fine-tuned deep learning model based on the DenseNet architecture, which incorporates an attention module. The dataset comprises 591 thyroid nodule images categorized based on the Bethesda score. Thyroid nodules are classified as either requiring FNA or not. The challenges encountered in this task include managing variability in image quality, addressing the presence of artifacts in ultrasound image datasets, tackling class imbalance, and ensuring model interpretability. We employed techniques such as data augmentation, class weighting, and gradient-weighted class activation maps (Grad-CAM) to enhance model performance and provide insights into decision making.
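Of the techniques listed above, class weighting is the most self-contained to illustrate. The sketch below computes inverse-frequency class weights (the "balanced" heuristic popularized by scikit-learn), which would scale each sample's loss so the rarer class counts more per image. The 500/91 split is purely hypothetical for illustration; the abstract only states that the dataset contains 591 images in total, not the per-class counts, and this is not necessarily the weighting scheme the authors used.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Compute per-class weights inversely proportional to class frequency.

    A common remedy for class imbalance: each sample's loss is scaled by
    its class weight, so the minority class contributes more per sample.
    Uses weight_c = n_samples / (n_classes * count_c), as in
    scikit-learn's compute_class_weight(class_weight="balanced").
    """
    counts = Counter(labels)
    n_samples = len(labels)
    n_classes = len(counts)
    return {c: n_samples / (n_classes * k) for c, k in counts.items()}

# Hypothetical split of the 591 images (not the paper's actual counts):
labels = ["no_fna"] * 500 + ["fna"] * 91
weights = inverse_frequency_weights(labels)
# The minority "fna" class receives a proportionally larger weight.
```

Such weights are typically passed to the loss function (e.g. a weighted cross-entropy) during fine-tuning, complementing data augmentation rather than replacing it.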
Our approach achieved excellent results, with an average accuracy of 0.94, an F1-score of 0.93, and a sensitivity of 0.96. The use of Grad-CAM provides insight into the model's decision making and thereby reinforces the reliability of the binary classification from the end-user's perspective.
We propose a deep learning architecture that effectively classifies thyroid nodules as requiring FNA or not from ultrasound images. Despite challenges related to image variability, class imbalance, and interpretability, our method demonstrated a high classification accuracy with minimal false negatives, showing its potential to reduce unnecessary FNAs in clinical settings.