Fluorescence lifetime image microscopy prediction with convolutional neural networks for cell detection and classification in tissues.

Author Information

Smolen Justin A, Wooley Karen L

Affiliations

Departments of Chemistry, Chemical Engineering, and Materials Science and Engineering, Texas A&M University, College Station, TX 77842, USA.

Publication Information

PNAS Nexus. 2022 Oct 14;1(5):pgac235. doi: 10.1093/pnasnexus/pgac235. eCollection 2022 Nov.

Abstract

Convolutional neural networks (CNNs) and other deep-learning models have proven to be transformative tools for the automated analysis of microscopy images, particularly in the domain of cellular and tissue imaging. These computer-vision models have primarily been applied to traditional microscopy imaging modalities (e.g. brightfield and fluorescence), likely because large datasets are available in these regimes. However, more advanced microscopy imaging techniques could potentially allow for improved model performance in various computational histopathology tasks. In this work, we demonstrate that CNNs can achieve high accuracy in cell detection and classification without large amounts of data when applied to histology images acquired by fluorescence lifetime imaging microscopy (FLIM). This accuracy is higher than what would be achieved with regular single- or dual-channel fluorescence images under the same settings, particularly for CNNs pretrained on publicly available fluorescent-cell or general image datasets. Additionally, FLIM images could be predicted from the fluorescence image data alone by using a dense U-Net CNN trained on a subset of ground-truth FLIM images. These U-Net-generated FLIM images were highly similar to ground truth and, when used as input to a variety of commonly used CNNs, improved cell detection and classification accuracy over fluorescence alone. This improved accuracy was maintained even when the FLIM images were generated by a U-Net trained on only a few example FLIM images.
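The abstract describes a dense U-Net that maps fluorescence channels to a predicted FLIM image. As a minimal sketch of that image-to-image setup (not the authors' implementation), the following PyTorch encoder-decoder with a single skip connection takes a dual-channel fluorescence tile and outputs a single-channel FLIM prediction; the class name `MiniUNet`, channel counts, and tile size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    """Hypothetical minimal U-Net: dual-channel fluorescence in, one FLIM channel out."""

    def __init__(self, in_ch=2, out_ch=1, base=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)  # upsample back to input size
        self.dec1 = nn.Sequential(nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU())
        self.out = nn.Conv2d(base, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                   # skip-connection features
        e2 = self.enc2(self.pool(e1))                       # downsampled features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1)) # upsample and fuse skip
        return self.out(d1)

model = MiniUNet()
x = torch.randn(1, 2, 64, 64)  # one dual-channel fluorescence tile
y = model(x)                   # predicted single-channel FLIM tile
print(y.shape)                 # torch.Size([1, 1, 64, 64])
```

In the workflow described above, a model of this shape would be trained on fluorescence/FLIM image pairs; its predictions would then serve as input to downstream detection and classification CNNs.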
