
Dual-convolutional neural network-enhanced strain estimation method for optical coherence elastography.

Author Information

Bai Yulei, Zhang Zhanhua, He Zhaoshui, Xie Shengli, Dong Bo

Publication Information

Opt Lett. 2024 Feb 1;49(3):438-441. doi: 10.1364/OL.507931.

Abstract

Strain estimation is vital in phase-sensitive optical coherence elastography (PhS-OCE). In this Letter, we introduce a method, novel to the best of our knowledge, that improves strain estimation using a dual-convolutional neural network (Dual-CNN). The approach requires two PhS-OCE systems: a high-resolution system that provides high-quality training data and a cost-effective standard-resolution system used for practical measurements. During training, high-resolution strain results produced by the former system and a pre-existing strain-estimation CNN serve as label data, while standard-resolution phase results acquired with a narrowed light source serve as input data. A new network trained on these pairs can then estimate high-quality strain results directly from standard-resolution PhS-OCE phase results. Comparison experiments show that the proposed Dual-CNN preserves strain quality even when the light-source bandwidth is reduced by more than 80%.
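The abstract contrasts CNN-based strain estimation with conventional phase-gradient processing. As a self-contained illustration only (the paper's networks are not reproduced here), the sketch below shows the standard baseline such methods aim to improve: in PhS-OCE, local strain is proportional to the depth derivative of the phase, commonly estimated as a sliding-window least-squares slope. The synthetic phase ramp, noise level, and window size are all illustrative assumptions, not values from the paper.

```python
import numpy as np

# Synthetic PhS-OCE phase difference along depth: a linear ramp whose
# slope is the (uniform) strain, plus additive phase noise.
depth = np.arange(256)
true_strain = 2e-3
rng = np.random.default_rng(0)
phase = true_strain * depth + rng.normal(0.0, 0.05, depth.size)

def ls_strain(phase, window=16):
    """Estimate strain as the sliding-window least-squares slope of
    phase versus depth (a conventional vector/gradient baseline)."""
    half = window // 2
    z = np.arange(window) - (window - 1) / 2.0   # centered depth offsets
    denom = np.sum(z ** 2)
    strain = np.full(phase.size, np.nan)
    for i in range(half, phase.size - half):
        seg = phase[i - half:i + half]
        # Closed-form least-squares slope over the window.
        strain[i] = np.sum(z * (seg - seg.mean())) / denom
    return strain

est = ls_strain(phase)
print(np.nanmean(est))  # close to true_strain
```

Smaller windows track local strain variations but amplify phase noise, which is the trade-off that learned estimators such as the Dual-CNN are designed to sidestep.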

