General model for depth-resolved estimation of the optical attenuation coefficients in optical coherence tomography.

Affiliations

Instituto Cientifico e Tecnologico, Universidade Brasil, São Paulo, Brazil.

Universidade de Sao Paulo USP - IPEN - CNEN/SP, Instituto de Pesquisas Energeticas e Nucleares, São Paulo, Brazil.

Publication Information

J Biophotonics. 2019 Oct;12(10):e201800402. doi: 10.1002/jbio.201800402. Epub 2019 Jul 2.

Abstract

We present a proof of concept of a general model that uses the tissue sample transmittance as input to estimate the depth-resolved attenuation coefficient of tissue samples using optical coherence tomography (OCT). This method allows us to obtain an image of tissue optical properties instead of intensity contrast, guiding diagnosis and tissue differentiation, and extends the method's applicability from thick to thin samples. The performance of our method was simulated and tested with the assistance of home-built single-layered and multilayered phantoms (~100 μm per layer) with known attenuation coefficients in the range of 0.9 to 2.32 mm⁻¹. It is shown that the estimated depth-resolved attenuation coefficient recovers the reference values, measured by using an integrating sphere followed by the inverse adding-doubling processing technique. This was corroborated in all situations in which the correct transmittance value was used, with an average difference of 7%. Finally, we applied the proposed method to estimate the depth-resolved attenuation coefficient of a thin biological sample, demonstrating the ability of our method on real OCT images.
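The abstract does not give the estimator's closed form, but the standard depth-resolved attenuation estimator it builds on (Vermeer et al.) computes each pixel's attenuation from the ratio of its intensity to the summed intensity below it, implicitly assuming all light is attenuated within the scan (i.e., zero transmittance), which fails for thin samples. The sketch below implements that classic estimator plus a hypothetical transmittance-dependent tail term (the function name, the `transmittance` parameter, and the exact form of the correction are illustrative assumptions, not the paper's published formula):

```python
import numpy as np

def depth_resolved_attenuation(intensity, pixel_size_mm, transmittance=0.0):
    """Estimate the depth-resolved attenuation coefficient [mm^-1]
    from a single OCT A-scan.

    Classic estimator: mu[i] ~= I[i] / (2 * delta * sum_{j>i} I[j]),
    valid when essentially no light exits the sample. The extra term
    scaled by `transmittance` is an illustrative correction for light
    that leaves a thin sample undetected.
    """
    I = np.asarray(intensity, dtype=float)
    # Sum of intensities strictly below each pixel (sum over j > i).
    tail = np.cumsum(I[::-1])[::-1] - I
    # Hypothetical thin-sample correction: add back signal that was
    # transmitted through the sample rather than backscattered.
    tail = tail + transmittance * I.sum()
    return I / (2.0 * pixel_size_mm * np.maximum(tail, np.finfo(float).tiny))

# Usage: simulate an A-scan decaying as I(z) = exp(-2 * mu * z)
mu_true = 1.5            # mm^-1, within the phantom range above
delta = 0.003            # 3 um axial pixel size, in mm
z = np.arange(2000) * delta
I = np.exp(-2.0 * mu_true * z)
mu_est = depth_resolved_attenuation(I, delta)
```

For this thick, fully attenuating scan the classic form (transmittance = 0) recovers `mu_true` away from the bottom of the scan; the recovered value diverges near the last pixels, which is exactly the regime where a transmittance term matters.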
