Noree Sakonporn, Quinones Robles Willmer Rafell, Ko Young Sin, Yi Mun Yong
Graduate School of Data Science, Department of Industrial and Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea.
Pathology Center, Seegene Medical Foundation, Seoul, South Korea.
BMC Med Imaging. 2025 Jul 1;25(1):230. doi: 10.1186/s12880-025-01760-8.
Accurate classification of histopathological whole slide images (WSIs) is essential for cancer diagnosis and treatment planning. Conventional WSI creation involves slicing a biopsy tissue into multiple slices, placing them on a single glass slide, and digitally scanning them. While deep learning approaches have shown promise in WSI analysis, they mostly overlook potential common patterns across different slices of the original tissue.
We propose a novel technique that leverages inter-slice commonality to enhance classification performance. Our method constructs graphs for each tissue slice, extracts relevant features, and connects these graphs based on spatial relationships and feature similarities, creating a comprehensive representation of the entire tissue sample, which is then used for WSI classification using graph convolutional networks.
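The pipeline described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the k-NN adjacency, the cosine-similarity threshold for inter-slice "commonality" edges, and the single hand-rolled graph-convolution layer are all assumptions made for the sketch, with synthetic patch features standing in for real slide embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic patch features for two slices of the same tissue (N patches x D dims).
slice_feats = [rng.normal(size=(5, 8)), rng.normal(size=(4, 8))]

def knn_adjacency(X, k=2):
    """Within-slice graph: connect each patch to its k nearest neighbours."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    A = np.zeros((len(X), len(X)))
    for i, row in enumerate(d):
        for j in np.argsort(row)[:k]:
            A[i, j] = A[j, i] = 1.0
    return A

# Stack all slices; start from a block-diagonal adjacency (one graph per slice).
feats = np.vstack(slice_feats)
n = len(feats)
offs = np.cumsum([0] + [len(f) for f in slice_feats])
A = np.zeros((n, n))
for f, o in zip(slice_feats, offs):
    A[o:o + len(f), o:o + len(f)] = knn_adjacency(f)

# Inter-slice "commonality" edges: link patches from *different* slices whose
# features are highly similar (cosine similarity above an assumed threshold).
norm = feats / np.linalg.norm(feats, axis=1, keepdims=True)
sim = norm @ norm.T
slice_of = np.searchsorted(offs, np.arange(n), side="right") - 1
for i in range(n):
    for j in range(n):
        if slice_of[i] != slice_of[j] and sim[i, j] > 0.5:
            A[i, j] = 1.0

# One graph-convolution layer on the joint graph:
# H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W).
A_hat = A + np.eye(n)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
W = rng.normal(size=(8, 2))
H = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ feats @ W, 0)

# Mean-pool node embeddings into one slide-level vector for classification.
slide_embedding = H.mean(axis=0)
print(slide_embedding.shape)
```

The key design point this sketch mirrors is that the inter-slice edges let the convolution propagate information between slices of the same tissue, so the pooled slide-level representation reflects patterns shared across slices rather than treating each slice in isolation.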
We validated our approach on stomach and colorectal WSI datasets. The results demonstrate that incorporating commonality information across slices significantly improves graph-based WSI classification models. Notably, our method outperforms existing multiple instance learning approaches in both accuracy (from 87.9% to 91.5% on the stomach dataset and from 88.3% to 91.2% on the colorectal dataset) and AUROC (from 96.8% to 98.8% on the stomach dataset and from 97.3% to 98.2% on the colorectal dataset).
By efficiently linking information across slices, our approach offers a more accurate method for WSI classification, with promising implications for clinical applications. The source code is available at https://github.com/Juckjick/commonality_graph.
Clinical trial number: Not applicable.