Probabilistic volumetric speckle suppression in OCT using deep learning.

Author Information

Chintada Bhaskara Rao, Ruiz-Lopera Sebastián, Restrepo René, Bouma Brett E, Villiger Martin, Uribe-Patarroyo Néstor

Affiliations

Wellman Center for Photomedicine, Massachusetts General Hospital, Boston, MA 02114, USA.

Harvard Medical School, Boston, MA 02115, USA.

Publication Information

Biomed Opt Express. 2024 Jul 3;15(8):4453-4469. doi: 10.1364/BOE.523716. eCollection 2024 Aug 1.

Abstract

We present a deep learning framework for volumetric speckle reduction in optical coherence tomography (OCT) based on a conditional generative adversarial network (cGAN) that leverages the volumetric nature of OCT data. To exploit this volumetric nature, our network takes partial OCT volumes as input, producing artifact-free despeckled volumes that exhibit excellent speckle reduction and resolution preservation in all three dimensions. Furthermore, we address the ongoing challenge of generating ground truth data for supervised speckle suppression deep learning frameworks by using volumetric non-local means despeckling (TNode) to generate training data. We show that, while TNode processing is computationally demanding, it serves as a convenient, accessible gold-standard source of training data; our cGAN replicates the efficient speckle suppression of non-local means despeckling, preserving tissue structures with dimensions approaching the system resolution, while being two orders of magnitude faster than TNode. We demonstrate fast, effective, and high-quality despeckling by the proposed network in tissue types that are not part of the training data. This was achieved with training data composed of just three OCT volumes and demonstrated on three different OCT systems. The open-source nature of our work facilitates re-training and deployment in any OCT system with an all-software implementation, working around the challenge of generating high-quality, speckle-free training data.
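The non-local means principle behind the TNode gold standard can be illustrated with a toy example: each pixel is replaced by a weighted average of pixels in a search window, with weights set by patch similarity, which averages out speckle while similar structures reinforce each other. The sketch below is a minimal, pure-Python 2D illustration of that principle, not the authors' volumetric TNode implementation; `simulate_speckle`, `nlm_despeckle`, and all parameter values are illustrative assumptions.

```python
import math
import random

def simulate_speckle(rows, cols, mean_intensity=1.0, seed=0):
    """Simulate fully developed speckle on a flat scene: intensity
    follows an exponential distribution (multiplicative noise)."""
    rng = random.Random(seed)
    return [[rng.expovariate(1.0 / mean_intensity) for _ in range(cols)]
            for _ in range(rows)]

def nlm_despeckle(img, patch=1, search=3, h=2.0):
    """Minimal non-local means: each pixel becomes a weighted average
    of pixels in a (2*search+1)^2 window, weighted by the mean squared
    distance between their (2*patch+1)^2 neighbourhoods."""
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            num = den = 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < rows and 0 <= jj < cols):
                        continue
                    # Patch distance between neighbourhoods of (i,j) and (ii,jj)
                    d, n = 0.0, 0
                    for pi in range(-patch, patch + 1):
                        for pj in range(-patch, patch + 1):
                            a, b = i + pi, j + pj
                            c, e = ii + pi, jj + pj
                            if 0 <= a < rows and 0 <= b < cols \
                                    and 0 <= c < rows and 0 <= e < cols:
                                d += (img[a][b] - img[c][e]) ** 2
                                n += 1
                    w = math.exp(-(d / max(n, 1)) / (h * h))
                    num += w * img[ii][jj]
                    den += w
            out[i][j] = num / den
    return out

def variance(img):
    flat = [v for row in img for v in row]
    m = sum(flat) / len(flat)
    return sum((v - m) ** 2 for v in flat) / len(flat)

noisy = simulate_speckle(16, 16)
clean = nlm_despeckle(noisy)
# On a flat scene the despeckled variance should drop well below the
# speckle variance, while a structured scene would keep its features.
print(variance(clean) < variance(noisy))
```

The computational cost visible in the quadruple loop is exactly why the paper trades this gold standard for a trained network at inference time: the cGAN pays the non-local means cost once, during training-data generation, rather than on every new volume.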

Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2eb7/11427188/bd49f6fbe1b5/boe-15-8-4453-g001.jpg
