


Real-time phase imaging with physics-enhanced network and equivariance.

Publication Info

Opt Lett. 2023 May 15;48(10):2732-2735. doi: 10.1364/OL.487150.

DOI: 10.1364/OL.487150
PMID: 37186752
Abstract

Learning-based phase imaging balances high fidelity and speed. However, supervised training requires unambiguous and large-scale datasets, which are often hard or impossible to obtain. Here, we propose an architecture for real-time phase imaging based on a physics-enhanced network and equivariance (PEPI). The measurement consistency and equivariant consistency of physical diffraction images are used to optimize the network parameters and invert the process from a single diffraction pattern. In addition, we propose a regularization method based on a total variation kernel (TV-K) function constraint to output more texture details and high-frequency information. The results show that PEPI can produce the object phase quickly and accurately, and the proposed learning strategy performs closely to the fully supervised method in the evaluation function. Moreover, the PEPI solution can handle high-frequency details better than the fully supervised method. The reconstruction results validate the robustness and generalization ability of the proposed method. Specifically, our results show that PEPI leads to considerable performance improvement on the imaging inverse problem, thereby paving the way for high-precision unsupervised phase imaging.
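The training signal the abstract describes combines three terms: measurement consistency against a diffraction forward model, equivariance consistency under image transformations, and a total-variation regularizer. A minimal NumPy sketch of those three loss components follows; the forward operator, the "network", and the plain anisotropic TV used here are illustrative placeholders, since the paper's actual TV-K kernel and diffraction model are not specified in the abstract:

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation: sum of absolute finite differences.
    (Placeholder for the paper's TV-K kernel constraint.)"""
    dx = np.abs(np.diff(img, axis=1)).sum()
    dy = np.abs(np.diff(img, axis=0)).sum()
    return float(dx + dy)

def equivariance_loss(net, x, transform):
    """Equivariant consistency: penalize mismatch between
    net(T(x)) and T(net(x)) for a transformation T."""
    return float(np.mean((net(transform(x)) - transform(net(x))) ** 2))

def measurement_consistency(net, y, forward):
    """Measurement consistency: the reconstruction, pushed back through
    the forward model, should reproduce the measured pattern y."""
    return float(np.mean((forward(net(y)) - y) ** 2))

# Demo with stand-ins: a 90-degree rotation as T, an identity "network",
# and Fourier magnitude as a toy diffraction forward model.
rot90 = lambda a: np.rot90(a)
identity_net = lambda a: a
toy_forward = lambda a: np.abs(np.fft.fft2(a))

x = np.arange(16, dtype=float).reshape(4, 4)
print(total_variation(np.ones((4, 4))))           # flat image -> 0.0
print(equivariance_loss(identity_net, x, rot90))  # identity is equivariant -> 0.0
```

In the actual method these terms would be weighted and summed into a single training loss for the network; here the demo only verifies the two properties one would expect (a flat image has zero TV, and an equivariant map incurs zero equivariance loss).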


Similar Articles

1. Real-time phase imaging with physics-enhanced network and equivariance.
   Opt Lett. 2023 May 15;48(10):2732-2735. doi: 10.1364/OL.487150.
2. Physics-based supervised learning method for high dynamic range 3D measurement with high fidelity.
   Opt Lett. 2024 Feb 1;49(3):602-605. doi: 10.1364/OL.506775.
3. Multi-mask self-supervised learning for physics-guided neural networks in highly accelerated magnetic resonance imaging.
   NMR Biomed. 2022 Dec;35(12):e4798. doi: 10.1002/nbm.4798. Epub 2022 Jul 17.
4. Sampling Equivariant Self-Attention Networks for Object Detection in Aerial Images.
   IEEE Trans Image Process. 2023;32:6413-6425. doi: 10.1109/TIP.2023.3327586. Epub 2023 Nov 28.
5. Physics constrained unsupervised deep learning for rapid, high resolution scanning coherent diffraction reconstruction.
   Sci Rep. 2023 Dec 21;13(1):22789. doi: 10.1038/s41598-023-48351-7.
6. Dynamic coherent diffractive imaging with a physics-driven untrained learning method.
   Opt Express. 2021 Sep 27;29(20):31426-31442. doi: 10.1364/OE.433507.
7. A regularization-driven Mean Teacher model based on semi-supervised learning for medical image segmentation.
   Phys Med Biol. 2022 Aug 30;67(17). doi: 10.1088/1361-6560/ac89c8.
8. Coherent modulation imaging using a physics-driven neural network.
   Opt Express. 2022 Sep 26;30(20):35647-35662. doi: 10.1364/OE.472083.
9. Dual-constrained physics-enhanced untrained neural network for lensless imaging.
   J Opt Soc Am A Opt Image Sci Vis. 2024 Feb 1;41(2):165-173. doi: 10.1364/JOSAA.510147.
10. Self-supervised learning for single-pixel imaging via dual-domain constraints.
   Opt Lett. 2023 Apr 1;48(7):1566-1569. doi: 10.1364/OL.483886.