Multi-Modal Graph Neural Networks for Colposcopy Data Classification and Visualization.

Author Information

Chatterjee Priyadarshini, Siddiqui Shadab, Kareem Razia Sulthana Abdul, Rao Srikanth R

Affiliations

Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Hyderabad 500075, Telangana, India.

Old Royal Naval College, University of Greenwich, Park Row, London SE10 9LS, UK.

Publication Information

Cancers (Basel). 2025 Apr 30;17(9):1521. doi: 10.3390/cancers17091521.

Abstract

BACKGROUND

Cervical lesion classification is essential for early detection of cervical cancer. While deep learning methods have shown promise, most rely on single-modal data or require extensive manual annotations. This study proposes a novel Graph Neural Network (GNN)-based framework that integrates colposcopy images, segmentation masks, and graph representations for improved lesion classification.
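
The abstract does not spell out how the three modalities are fused, but a fully connected graph built from a colposcopy image and its segmentation mask might look like the sketch below. This is an illustrative assumption, not the authors' published pipeline: each mask region becomes a node whose feature is that region's mean colour, and every pair of nodes is joined by an edge.

```python
# Illustrative sketch only (the paper's exact construction is not given):
# turn a colposcopy image plus its segmentation mask into a fully
# connected torch_geometric graph, one node per mask region.
import itertools

import numpy as np
import torch
from torch_geometric.data import Data

def image_mask_to_graph(image: np.ndarray, mask: np.ndarray) -> Data:
    """image: HxWx3 RGB array; mask: HxW integer region labels (0 = background)."""
    regions = [r for r in np.unique(mask) if r != 0]
    assert len(regions) >= 2, "need at least two regions to form edges"
    # Node feature = mean colour of the region (a deliberately simple choice).
    feats = [image[mask == r].mean(axis=0) for r in regions]
    x = torch.tensor(np.stack(feats), dtype=torch.float)
    # Fully connected, directed-both-ways edge list over all region nodes.
    pairs = list(itertools.permutations(range(len(regions)), 2))
    edge_index = torch.tensor(pairs, dtype=torch.long).t().contiguous()
    return Data(x=x, edge_index=edge_index)
```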

METHODS

We developed a fully connected graph-based architecture using GCNConv layers with global mean pooling and optimized it via grid search. A five-fold cross-validation protocol was employed to evaluate performance before fine-tuning (epochs 1-100) and after fine-tuning (epochs 101-151). Performance metrics included macro-average F1-score and validation accuracy. Visualizations were used for model interpretability.
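
As a concrete reading of these Methods, the sketch below (assuming PyTorch Geometric and scikit-learn) pairs a two-layer GCNConv network with global mean pooling and wraps it in a five-fold cross-validation loop. Depth, hidden width, learning rate, batch size, and epoch count are placeholder values standing in for the ones the paper found by grid search.

```python
# Minimal sketch of the described architecture: GCNConv layers with
# global mean pooling, evaluated under five-fold cross-validation.
# Hyperparameters here are placeholders for the grid-searched values.
import torch
import torch.nn.functional as F
from sklearn.model_selection import KFold
from torch_geometric.loader import DataLoader
from torch_geometric.nn import GCNConv, global_mean_pool

class LesionGNN(torch.nn.Module):
    def __init__(self, in_dim: int, hidden: int = 64, num_classes: int = 2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.lin = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))  # message passing, layer 1
        x = F.relu(self.conv2(x, edge_index))  # message passing, layer 2
        x = global_mean_pool(x, batch)         # node -> graph-level embedding
        return self.lin(x)                     # class logits

def cross_validate(dataset, in_dim: int, epochs: int = 100, folds: int = 5):
    """Five-fold CV skeleton mirroring the protocol described above."""
    for train_idx, _val_idx in KFold(folds, shuffle=True, random_state=0).split(dataset):
        model = LesionGNN(in_dim)
        optim = torch.optim.Adam(model.parameters(), lr=1e-3)
        loader = DataLoader([dataset[i] for i in train_idx],
                            batch_size=32, shuffle=True)
        for _ in range(epochs):
            for batch in loader:
                optim.zero_grad()
                logits = model(batch.x, batch.edge_index, batch.batch)
                F.cross_entropy(logits, batch.y).backward()
                optim.step()
        # ...compute macro-F1 and validation accuracy on the held-out fold here...
```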

RESULTS

The model achieved a macro-average F1-score of 89.4% and validation accuracy of 92.1% before fine-tuning, which improved to 94.56% and 98.98%, respectively, after fine-tuning. LIME-based visual explanations confirmed that the model focuses on discriminative lesion regions.
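
The abstract names LIME as the explanation method but gives no implementation detail. A hedged sketch of how such region-level explanations are typically produced with the `lime` package follows; `predict_fn` is a hypothetical wrapper mapping a batch of RGB images to class probabilities, not a function from the paper.

```python
# Hedged sketch: highlight the image regions driving a prediction with
# LIME. `predict_fn` is a hypothetical callable, (N, H, W, 3) uint8
# images -> (N, num_classes) probabilities; the paper's exact pipeline
# is not described in the abstract.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def explain_prediction(image: np.ndarray, predict_fn) -> np.ndarray:
    """Return `image` with LIME's top positive regions outlined."""
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image, predict_fn, top_labels=1, num_samples=1000
    )
    label = explanation.top_labels[0]
    img, mask = explanation.get_image_and_mask(
        label, positive_only=True, num_features=5, hide_rest=False
    )
    return mark_boundaries(img / 255.0, mask)  # assumes a uint8 input image
```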

CONCLUSIONS

This study highlights the potential of graph-based multi-modal learning for cervical lesion analysis. Developed in collaboration with the MNJ Institute of Oncology, the framework shows promise for clinical use.

Figure 1. https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6964/12070989/8be7a11aa597/cancers-17-01521-g001.jpg
