Wang Qiang, Hopgood James R, Finlayson Neil, Williams Gareth O S, Fernandes Susan, Williams Elvira, Akram Ahsan, Dhaliwal Kevin, Vallejo Marta
Annu Int Conf IEEE Eng Med Biol Soc. 2020 Jul;2020:1891-1894. doi: 10.1109/EMBC44109.2020.9175598.
Fluorescence lifetime is effective in discriminating cancerous tissue from normal tissue, but conventional discrimination methods are primarily based on statistical approaches combined with prior knowledge. This paper investigates the application of deep convolutional neural networks (CNNs) to the automatic differentiation of ex-vivo human lung cancer via fluorescence lifetime imaging. Around 70,000 fluorescence images from ex-vivo lung tissue of 14 patients were collected by a custom fibre-based fluorescence lifetime imaging endomicroscope. Five state-of-the-art CNN models, namely ResNet, ResNeXt, Inception, Xception, and DenseNet, were trained and tested to derive quantitative results, with accuracy, precision, recall, and the area under the receiver operating characteristic curve (AUC) as the metrics. The CNNs were first evaluated on lifetime images alone. Since fluorescence lifetime is independent of intensity, further experiments were conducted by stacking intensity and lifetime images together as the input to the CNNs. As the original CNNs were implemented for RGB images, two strategies were applied. One retained the CNNs unchanged by placing the intensity and lifetime images in two of the three input channels and leaving the remaining channel blank. The other adapted the CNNs to accept two-channel input directly. Quantitative results demonstrate that the selected CNNs are considerably superior to conventional machine learning algorithms. Combining intensity and lifetime images yields a noticeable performance gain over using lifetime images alone. In addition, the CNNs fed intensity-lifetime RGB images are comparable to the modified two-channel CNNs fed intensity-lifetime two-channel input in accuracy and AUC, but significantly better in precision and recall.
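The two input-preparation strategies described in the abstract can be sketched as follows. This is a minimal PyTorch illustration, not the authors' implementation: the helper names (`pack_rgb`, `pack_two_channel`) and the ResNet-style stem convolutions are assumptions for demonstration only.

```python
import torch
import torch.nn as nn

def pack_rgb(intensity, lifetime):
    """Strategy 1: keep the stock 3-channel CNN unchanged.
    Intensity and lifetime fill two channels; the third is left blank (zeros)."""
    blank = torch.zeros_like(intensity)
    return torch.stack([intensity, lifetime, blank], dim=1)  # (N, 3, H, W)

def pack_two_channel(intensity, lifetime):
    """Strategy 2: stack only the two modalities; the CNN's first
    convolution must then be adapted to accept 2-channel input."""
    return torch.stack([intensity, lifetime], dim=1)  # (N, 2, H, W)

# Stand-ins for the first layer of a ResNet-style network:
stem_rgb = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
stem_2ch = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)

# Toy batch: 4 pairs of co-registered intensity and lifetime images.
intensity = torch.rand(4, 128, 128)
lifetime = torch.rand(4, 128, 128)

x3 = pack_rgb(intensity, lifetime)          # feeds an unmodified CNN
x2 = pack_two_channel(intensity, lifetime)  # feeds a modified CNN
print(stem_rgb(x3).shape)  # torch.Size([4, 64, 64, 64])
print(stem_2ch(x2).shape)  # torch.Size([4, 64, 64, 64])
```

Both strategies produce the same feature-map shape after the stem; they differ only in whether the pretrained 3-channel first layer is kept (with one wasted channel) or replaced by a 2-channel one.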