

High-resolution lensless holographic microscopy using a physics-aware deep network.

Affiliation

Indian Institute of Technology Hyderabad, Department of Biomedical Engineering, Medical Optics and Sensors Laboratory, Hyderabad, Telangana, India.

Publication information

J Biomed Opt. 2024 Oct;29(10):106502. doi: 10.1117/1.JBO.29.10.106502. Epub 2024 Oct 8.

Abstract

SIGNIFICANCE

Lensless digital inline holographic microscopy (LDIHM) is an emerging quantitative phase imaging modality that uses advanced computational methods for phase retrieval from the interference pattern. Existing end-to-end deep networks require a large training dataset with sufficient diversity to achieve high-fidelity hologram reconstruction. To mitigate this data requirement, physics-aware deep networks integrate the physics of holography into the loss function to reconstruct complex objects without prior training. However, the data fidelity term measures consistency with a single low-resolution hologram without any external regularization, which results in low performance on complex biological data.

AIM

We aim to mitigate the challenges with trained and physics-aware untrained deep networks separately and combine the benefits of both methods for high-resolution phase recovery from a single low-resolution hologram in LDIHM.

APPROACH

We propose a hybrid deep framework (HDPhysNet) using a plug-and-play method that blends the benefits of trained and untrained deep models for phase recovery in LDIHM. The high-resolution phase is generated by a pre-trained high-definition generative adversarial network (HDGAN) from a single low-resolution hologram. The generated phase is then plugged into the loss function of a physics-aware untrained deep network to regulate the complex object reconstruction process.
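The abstract does not give the exact loss formulation, but the described plug-and-play scheme can be sketched as a composite objective: a data-fidelity term comparing the forward-propagated object against the measured hologram intensity, plus a penalty pulling the reconstructed phase toward the HDGAN-generated prior. The sketch below (numpy, function names and the squared-error forms are illustrative assumptions, not the paper's implementation) uses angular spectrum propagation as the holographic forward model:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dz, dx):
    """Propagate a complex field by distance dz via the angular spectrum method."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)  # spatial frequencies (cycles/m)
    fy = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent components clipped
    H = np.exp(1j * kz * dz)  # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

def hybrid_loss(obj_field, hologram, phase_prior, wavelength, dz, dx, lam=0.1):
    """Illustrative HDPhysNet-style loss: hologram consistency + phase-prior penalty."""
    # Data fidelity: predicted hologram intensity vs. the single measured hologram.
    pred = np.abs(angular_spectrum_propagate(obj_field, wavelength, dz, dx)) ** 2
    data_term = np.mean((pred - hologram) ** 2)
    # Regularization: stay close to the phase generated by the pre-trained HDGAN.
    prior_term = np.mean((np.angle(obj_field) - phase_prior) ** 2)
    return data_term + lam * prior_term
```

In the described framework, an untrained network would output `obj_field` and be optimized against this loss, so the HDGAN prior regulates the reconstruction while the physics term enforces consistency with the measurement; `lam` balances the two.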

RESULTS

Simulation results show that the SSIM of the proposed method is increased by 0.07 over the trained and 0.04 over the untrained deep networks. The average phase-SNR is elevated by 8.2 dB over trained deep models and 9.8 dB over untrained deep networks on experimental biological cells (cervical cells and red blood cells).

CONCLUSIONS

We showed improved performance of HDPhysNet against unknown perturbations in the imaging parameters, such as the propagation distance, the wavelength of the illuminating source, and the imaging sample, compared with the trained network (HDGAN). LDIHM combined with HDPhysNet is a portable, technology-driven microscopy technique well suited for point-of-care cytology applications.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1b73/11460617/caca28cef35b/JBO-029-106502-g001.jpg
