Image Optimisation and Perception, Discipline of Medical Imaging and Radiation Sciences, Faculty of Health Sciences, University of Sydney, Sydney, NSW, Australia.
Artif Intell Med. 2018 Jun;88:14-24. doi: 10.1016/j.artmed.2018.04.005. Epub 2018 Apr 26.
Identifying the carcinoma subtype can help select appropriate treatment options, and determining the subtype of a benign lesion can help estimate the patient's risk of developing cancer in the future. Pathologists' assessment of lesion subtypes is considered the gold standard; however, strong disagreements among pathologists over the distinction between lesion subtypes have been reported in the literature.
To propose a framework for classifying hematoxylin-eosin-stained breast digital slides as either benign or malignant, and then categorizing the malignant and benign cases into four subtypes each.
We used data from a publicly available database (BreakHis) of 81 patients, each with images at four magnification factors (×40, ×100, ×200, and ×400), for a total of 7786 images. The proposed framework, called MuDeRN (MUlti-category classification of breast histopathological image using DEep Residual Networks), consists of two stages. In the first stage, for each magnification factor, a 152-layer deep residual network (ResNet) was trained to classify patches from the images as benign or malignant. In the second stage, the images classified as malignant were subdivided into four cancer subtypes, and those classified as benign were subdivided into four benign subtypes. Finally, a diagnosis was made for each patient by combining the ResNets' outputs across the different magnification factors using a meta-decision tree.
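The two-stage decision flow described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the stand-in classifiers are hypothetical (the paper uses ResNet-152 patch classifiers per magnification factor), the majority vote is a simple stand-in for the paper's meta-decision tree, and the subtype names are the standard BreakHis classes.

```python
from collections import Counter

# The eight BreakHis classes: four benign and four malignant subtypes.
BENIGN_SUBTYPES = ["adenosis", "fibroadenoma", "phyllodes_tumor", "tubular_adenoma"]
MALIGNANT_SUBTYPES = ["ductal_carcinoma", "lobular_carcinoma",
                      "mucinous_carcinoma", "papillary_carcinoma"]

def two_stage_label(p_malignant, benign_scores, malignant_scores):
    """Stage 1 decides benign vs. malignant from p_malignant; stage 2 then
    picks a subtype from the corresponding four-class score vector."""
    if p_malignant >= 0.5:
        names, scores = MALIGNANT_SUBTYPES, malignant_scores
    else:
        names, scores = BENIGN_SUBTYPES, benign_scores
    return names[max(range(4), key=lambda i: scores[i])]

def patient_diagnosis(labels_per_magnification):
    """Stand-in for the meta-decision tree: majority vote over the image-level
    labels obtained at the four magnification factors (x40, x100, x200, x400)."""
    return Counter(labels_per_magnification).most_common(1)[0][0]
```

For example, four magnification-level labels of which three agree would yield that majority label as the patient-level diagnosis; the paper's learned meta-decision tree replaces this naive vote.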
For the malignant/benign classification of images, MuDeRN's first stage achieved correct classification rates (CCR) of 98.52%, 97.90%, 98.33%, and 97.66% at the ×40, ×100, ×200, and ×400 magnification factors, respectively. For the eight-class categorization of images based on the outputs of both of MuDeRN's stages, the CCRs at the four magnification factors were 95.40%, 94.90%, 95.70%, and 94.60%. Finally, for patient-level diagnosis, MuDeRN achieved a CCR of 96.25% for the eight-class categorization.
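The CCR reported above is simply the fraction of cases whose predicted label matches the reference label, i.e. multi-class accuracy. A one-line sketch (the function name and toy labels are illustrative, not from the paper):

```python
def correct_classification_rate(y_true, y_pred):
    """CCR: proportion of cases where the predicted label equals the
    reference (pathologist-assigned) label."""
    if len(y_true) != len(y_pred):
        raise ValueError("label lists must have equal length")
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

For instance, 3 correct predictions out of 4 cases gives a CCR of 0.75, i.e. 75%.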
MuDeRN can be helpful in the categorization of breast lesions.