
A hybrid multi-instance learning-based identification of gastric adenocarcinoma differentiation on whole-slide images.

Authors

Zhang Mudan, Sun Xinhuan, Li Wuchao, Cao Yin, Liu Chen, Tu Guilan, Wang Jian, Wang Rongpin

Affiliations

Department of Radiology, Guizhou Provincial Key Laboratory of Intelligent Medical Image Analysis and Precision Diagnosis, Guizhou Provincial People's Hospital, No. 83 Zhongshan East Road, Nan Ming District, Guiyang, 550002, Guizhou, China.

Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China.

Publication

Biomed Eng Online. 2025 Jun 25;24(1):79. doi: 10.1186/s12938-025-01407-3.

Abstract

OBJECTIVE

To investigate the potential of a hybrid multi-instance learning model (TGMIL) combining Transformer and graph attention networks for classifying gastric adenocarcinoma differentiation on whole-slide images (WSIs) without manual annotation.

METHODS AND MATERIALS

A hybrid multi-instance learning model, TGMIL, based on a Transformer and a graph attention network, is proposed to classify the differentiation of gastric adenocarcinoma. A total of 613 WSIs from patients with gastric adenocarcinoma were retrospectively collected from two hospitals. According to the degree of differentiation, the data were divided into four groups: normal (n = 254), well differentiated (n = 166), moderately differentiated (n = 75), and poorly differentiated (n = 118). The gold-standard differentiation labels were established by two gastrointestinal pathologists in a blinded manner. The WSIs were randomly split into a training set of 494 images and a testing set of 119 images. Within the training set, the normal, well-differentiated, moderately differentiated, and poorly differentiated groups comprised 203, 131, 62, and 98 WSIs, respectively; within the test set, the corresponding counts were 51, 35, 13, and 20 WSIs.
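To make the architecture concrete, below is a minimal sketch of a hybrid MIL classifier that fuses a Transformer branch with a graph-attention branch over bag-of-patch features. This is not the authors' TGMIL implementation: the graph-attention layer, the adjacency construction, the feature dimension, and all hyperparameters are illustrative assumptions.

```python
# Sketch only: a hybrid MIL slide classifier with Transformer + graph-attention
# branches. Patch extraction and the feature backbone are assumed to exist
# upstream; one "bag" = all patch embeddings of a single WSI.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttention(nn.Module):
    """Single-head graph attention over a dense adjacency (patches = nodes)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.attn = nn.Linear(2 * dim, 1)

    def forward(self, x, adj):
        # x: (N, dim) patch features; adj: (N, N) 0/1 spatial adjacency
        h = self.proj(x)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.attn(pairs).squeeze(-1))      # (N, N) edge scores
        e = e.masked_fill(adj == 0, float('-inf'))          # attend to neighbors only
        alpha = torch.softmax(e, dim=-1)
        return F.elu(alpha @ h)

class HybridMIL(nn.Module):
    """Transformer + graph-attention branches pooled to a slide-level logit."""
    def __init__(self, dim=512, n_classes=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.gat = GraphAttention(dim)
        self.head = nn.Linear(2 * dim, n_classes)

    def forward(self, feats, adj):
        # feats: (N, dim) one bag of patch features for a single WSI
        t = self.transformer(feats.unsqueeze(0)).squeeze(0)  # (N, dim)
        g = self.gat(feats, adj)                             # (N, dim)
        fused = torch.cat([t.mean(0), g.mean(0)], dim=-1)    # slide embedding
        return self.head(fused)                              # 4-class logits

# Example: one bag of 100 patch features with a trivial self-loop adjacency.
feats = torch.randn(100, 512)
adj = torch.eye(100)
logits = HybridMIL()(feats, adj)
```

Because the label is attached to the whole slide rather than to individual patches, a model of this shape needs no patch-level (manual) annotation, which is the point of the MIL formulation.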

RESULTS

The TGMIL model performed well on the differentiation-prediction task in terms of sensitivity, specificity, and area under the curve (AUC). We also conducted a comparative analysis of five other models, namely MIL, CLAM_SB, CLAM_MB, DSMIL, and TransMIL, on classifying gastric cancer differentiation. TGMIL achieved a sensitivity of 73.33% and a specificity of 91.11%, with an AUC of 0.86.
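As a rough illustration of how such slide-level metrics can be computed, the sketch below derives per-class sensitivity and specificity and a macro one-vs-rest AUC from hypothetical predictions; the averaging scheme is an assumption, since the abstract does not specify how the multi-class AUC was aggregated.

```python
# Sketch: slide-level evaluation metrics for a 4-class task.
# y_true/y_prob are placeholder data, not results from the paper.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([0, 1, 2, 3, 0, 1])              # slide-level labels
y_prob = np.random.dirichlet(np.ones(4), size=6)   # per-class probabilities
y_pred = y_prob.argmax(axis=1)

# Multi-class AUC via one-vs-rest macro averaging (an assumed choice).
auc = roc_auc_score(y_true, y_prob, multi_class='ovr', average='macro')
print(f"macro OvR AUC: {auc:.2f}")

# Per-class sensitivity (recall) and specificity from the confusion matrix.
cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2, 3])
for c in range(4):
    tp = cm[c, c]
    fn = cm[c].sum() - tp
    fp = cm[:, c].sum() - tp
    tn = cm.sum() - tp - fn - fp
    print(f"class {c}: sensitivity={tp/(tp+fn):.2f}, specificity={tn/(tn+fp):.2f}")
```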

CONCLUSIONS

The hybrid multi-instance learning model TGMIL can accurately classify the differentiation of gastric adenocarcinoma on WSIs without labor-intensive, time-consuming manual annotation, which will improve the efficiency and objectivity of diagnosis.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b456/12199488/143bc63d207a/12938_2025_1407_Fig1_HTML.jpg
