Udriștoiu Anca Loredana, Podină Nicoleta, Ungureanu Bogdan Silviu, Constantin Alina, Georgescu Claudia Valentina, Bejinariu Nona, Pirici Daniel, Burtea Daniela Elena, Gruionu Lucian, Udriștoiu Stefan, Săftoiu Adrian
Faculty of Automation, Computers and Electronics, University of Craiova, Craiova, Romania.
Department of Gastroenterology, Ponderas Academic Hospital, Bucharest, Romania.
Endosc Ultrasound. 2024 Nov-Dec;13(6):335-344. doi: 10.1097/eus.0000000000000094. Epub 2024 Dec 12.
EUS-guided fine-needle biopsy is the procedure of choice for the diagnosis of pancreatic ductal adenocarcinoma (PDAC). Nevertheless, the samples obtained are small and require expertise in pathology, and diagnosis is difficult because of the scarcity of malignant cells and the marked desmoplastic reaction of these tumors. With the help of artificial intelligence, deep learning architectures offer a fast, accurate, and automated approach to PDAC image segmentation based on whole-slide imaging. Given the effectiveness of U-Net in semantic segmentation, numerous variants and improvements have emerged, specifically for whole-slide image segmentation.
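To make the U-Net idea concrete, the sketch below shows a deliberately tiny U-Net-style encoder-decoder in PyTorch. This is a generic illustration only: the abstract does not specify the framework, network depth, channel widths, or the exact Inception U-Net blocks used by the authors, so all of those details here are assumptions.

```python
# Minimal, illustrative U-Net-style model (assumed PyTorch; not the authors' implementation).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic building block of U-Net.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    # One encoder level, a bottleneck, and one decoder level with a skip connection.
    # Variants such as Inception U-Net replace or extend these convolutional blocks.
    def __init__(self, in_ch=3, n_classes=1):
        super().__init__()
        self.enc = conv_block(in_ch, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec = conv_block(64, 32)  # 64 channels = 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.pool(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))  # skip connection
        return self.head(d)  # per-pixel logits

if __name__ == "__main__":
    # A 256x256 RGB tile cropped from a whole-slide image yields a per-pixel mask logit.
    model = TinyUNet()
    tile = torch.randn(1, 3, 256, 256)
    print(model(tile).shape)  # torch.Size([1, 1, 256, 256])
```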
In this study, 7 U-Net architecture variants were compared on 2 different datasets of EUS-guided fine-needle biopsy samples from 2 medical centers (31 and 33 whole-slide images, respectively), acquired with different parameters and acquisition tools. The U-Net variants evaluated included some that had not previously been explored for PDAC whole-slide image segmentation. Segmentation performance was evaluated using the mean Dice coefficient and the mean intersection over union (IoU).
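For reference, the two metrics named above can be computed on binary masks as in the following NumPy sketch; the thresholding and averaging conventions here are assumptions, not the authors' exact evaluation code.

```python
# Dice coefficient and IoU on binary segmentation masks (illustrative sketch).
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice = 2 * |P intersect T| / (|P| + |T|)
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    # IoU (Jaccard index) = |P intersect T| / |P union T|
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

if __name__ == "__main__":
    # Compare a predicted PDAC mask against a pathologist annotation (synthetic example).
    rng = np.random.default_rng(0)
    gt = rng.random((256, 256)) > 0.5
    pred = gt.copy()
    pred[:10] = ~pred[:10]  # introduce some disagreement
    print(f"Dice: {dice_coefficient(pred, gt):.4f}, IoU: {iou(pred, gt):.4f}")
```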
The highest segmentation accuracy was obtained with the Inception U-Net architecture on both datasets. PDAC tissue was segmented with an overall average Dice coefficient of 97.82% and an IoU of 0.87 for Dataset 1, and an overall average Dice coefficient of 95.70% and an IoU of 0.79 for Dataset 2. We also performed external testing of the trained segmentation models by cross-evaluating them between the 2 datasets. The Inception U-Net model trained on Train Dataset 1 achieved an overall average Dice coefficient of 93.12% and an IoU of 0.74 on Test Dataset 2, whereas the model trained on Train Dataset 2 achieved an overall average Dice coefficient of 92.09% and an IoU of 0.81 on Test Dataset 1.
The findings of this study demonstrate the feasibility of using artificial intelligence for PDAC segmentation in whole-slide imaging, supported by promising scores.