Wiebe Mitchell, Haston Christina, Lamey Michael, Narayan Apurva, Rajapakshe Rasika
University of British Columbia, Okanagan Campus, Kelowna, BC, Canada.
BJR Open. 2023 Aug 15;5(1):20230008. doi: 10.1259/bjro.20230008. eCollection 2023.
Microscopic analysis of biopsied lung nodules is the gold standard for definitive diagnosis of lung cancer. Deep learning has achieved pathologist-level classification of non-small cell lung cancer histopathology images at high resolutions (0.5-2 µm/px), and recent studies have revealed tomography-histology relationships at lower spatial resolutions. We therefore tested whether patterns for histological classification of lung cancer could be detected at spatial resolutions such as those offered by ultra-high-resolution CT.
We investigated the performance of a deep convolutional neural network (Inception-v3) in classifying lung histopathology images at lower spatial resolutions than those of typical pathology. Models were trained on 2167 histopathology slides from The Cancer Genome Atlas to differentiate between lung cancer tissues (adenocarcinoma (LUAD) and squamous-cell carcinoma (LUSC)) and normal dense tissue. Slides were accessed at 2.5× magnification (4 µm/px), and reduced resolutions of 8, 16, 32, 64, and 128 µm/px were simulated by applying digital low-pass filters.
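The reduced resolutions described above can be simulated without resizing the image grid by low-pass filtering each tile. The sketch below is a minimal illustration of this idea, assuming a Gaussian kernel and a sigma heuristic; the abstract does not specify the authors' exact filter, so these choices are assumptions.

```python
# Sketch: simulate a coarser effective spatial resolution for an RGB
# histopathology tile via digital low-pass filtering (Gaussian kernel
# is an assumption; the abstract only says "digital low-pass filters").
import numpy as np
from scipy.ndimage import gaussian_filter

BASE_UM_PER_PX = 4.0  # 2.5x magnification, per the abstract


def simulate_resolution(tile: np.ndarray, target_um_per_px: float) -> np.ndarray:
    """Blur an (H, W, 3) tile so its effective resolution drops from
    BASE_UM_PER_PX to target_um_per_px; the pixel grid is unchanged."""
    factor = target_um_per_px / BASE_UM_PER_PX
    if factor <= 1.0:
        return tile.copy()
    # sigma ~ factor / 2 is a common anti-aliasing heuristic (assumption)
    sigma = factor / 2.0
    blurred = gaussian_filter(tile.astype(np.float32), sigma=(sigma, sigma, 0))
    return blurred.astype(tile.dtype)


tile = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
for target in (8, 16, 32, 64, 128):  # resolutions tested in the study
    degraded = simulate_resolution(tile, target)
```

Because only frequency content is removed, the same network input size can be used at every simulated resolution.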
The classifier achieved an area under the curve (AUC) ≥0.95 for all classes at spatial resolutions of 4-16 µm/px, and an AUC ≥0.95 for differentiating normal tissue from the two cancer types at 128 µm/px.
Features for tissue classification by deep learning exist at spatial resolutions lower than those typically viewed by pathologists.
We demonstrated that a deep convolutional network could differentiate normal from cancerous lung tissue at spatial resolutions as low as 128 µm/px, and LUAD, LUSC, and normal tissue at resolutions as low as 16 µm/px. Our data, together with the results of tomography-histology studies, indicate that these patterns should also be detectable within tomographic data at these resolutions.