

Unpaired data training enables super-resolution confocal microscopy from low-resolution acquisitions.

Author Information

Trujillo Carlos, Thompson Lauren, Skalli Omar, Doblas Ana

Publication Information

Opt Lett. 2024 Oct 15;49(20):5775-5778. doi: 10.1364/OL.537713.

Abstract

Supervised deep-learning models have enabled super-resolution imaging in several microscopy modalities, increasing the lateral spatial bandwidth of the original input images beyond the diffraction limit. Despite their success, their practical application poses several challenges in terms of the amount and quality of training data, requiring the experimental acquisition of large, paired databases to generate an accurate, generalized model whose performance remains invariant to unseen data. Cycle-consistent generative adversarial networks (cycleGANs) are unsupervised models for image-to-image translation tasks that are trained on unpaired datasets. This paper introduces a cycleGAN framework specifically designed to increase the lateral resolution limit in confocal microscopy by training a cycleGAN model on unpaired low- and high-resolution confocal images of human glioblastoma cells. The training and testing performance of the cycleGAN model has been assessed with metrics such as the background standard deviation, the peak-to-noise ratio, and a customized frequency content measure. Our cycleGAN model has also been evaluated in terms of image fidelity and resolution improvement using a paired dataset, showing superior performance to other reported methods. This work highlights the efficacy and promise of cycleGAN models for super-resolution microscopic imaging without paired training data, paving the way toward turning home-built low-resolution microscope systems into low-cost super-resolution instruments by means of unsupervised deep learning.
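The unpaired training scheme described in the abstract rests on cycleGAN's combination of adversarial and cycle-consistency losses between the two image domains. The sketch below is a minimal, illustrative PyTorch implementation of that objective for same-sized low-resolution (LR) and high-resolution (HR) confocal patches; the toy networks, optimizer settings, and the cycle-loss weight lambda_cyc are assumptions chosen for illustration and do not reproduce the architecture or hyperparameters used in the paper.

```python
# Minimal sketch (not the authors' code) of the cycleGAN objective for
# unpaired LR -> HR confocal image translation. Architectures, loss weights,
# and optimizer settings are illustrative placeholders.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.InstanceNorm2d(out_ch),
                         nn.ReLU(inplace=True))

class Generator(nn.Module):
    """Toy image-to-image generator (stand-in for a ResNet/U-Net backbone)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Toy PatchGAN-style discriminator returning a patch score map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

G_lr2hr, G_hr2lr = Generator(), Generator()    # the two mapping directions
D_hr, D_lr = Discriminator(), Discriminator()  # one discriminator per domain
l1, mse = nn.L1Loss(), nn.MSELoss()            # cycle loss / LSGAN-style loss
lambda_cyc = 10.0                              # assumed cycle-consistency weight

opt_G = torch.optim.Adam(list(G_lr2hr.parameters()) + list(G_hr2lr.parameters()),
                         lr=2e-4)

def generator_step(real_lr, real_hr):
    """One generator update on an *unpaired* batch of LR and HR patches."""
    fake_hr = G_lr2hr(real_lr)                 # LR -> HR
    fake_lr = G_hr2lr(real_hr)                 # HR -> LR
    # Adversarial terms: fool the discriminator of each target domain.
    pred_hr, pred_lr = D_hr(fake_hr), D_lr(fake_lr)
    adv = mse(pred_hr, torch.ones_like(pred_hr)) + \
          mse(pred_lr, torch.ones_like(pred_lr))
    # Cycle-consistency terms: LR -> HR -> LR and HR -> LR -> HR.
    cyc = l1(G_hr2lr(fake_hr), real_lr) + l1(G_lr2hr(fake_lr), real_hr)
    loss = adv + lambda_cyc * cyc
    opt_G.zero_grad()
    loss.backward()
    opt_G.step()
    return loss.item()

# Example call with random tensors standing in for unpaired confocal patches.
lr_batch = torch.rand(4, 1, 128, 128)
hr_batch = torch.rand(4, 1, 128, 128)
print(generator_step(lr_batch, hr_batch))
```

For brevity, the discriminator update is omitted; in a full training loop the discriminators D_hr and D_lr would be optimized separately (with their parameters frozen during the generator step), and the trained G_lr2hr mapping would then be applied to new low-resolution acquisitions.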

