
Super-resolution of 2D ultrasound images and videos.

Affiliations

CNR-IMATI, Via De Marini 6, Genova, Italy.

Esaote S.p.A., Via E. Melen 77, Genova, Italy.

Publication information

Med Biol Eng Comput. 2023 Oct;61(10):2511-2526. doi: 10.1007/s11517-023-02818-x. Epub 2023 May 17.

Abstract

This paper proposes a novel deep-learning framework for the super-resolution of ultrasound images and videos, in terms of spatial resolution and line reconstruction. To this end, we up-sample the acquired low-resolution image through a vision-based interpolation method; then, we train a learning-based model to improve the quality of the up-sampling. We qualitatively and quantitatively test our model on images of different anatomical districts (e.g., cardiac, obstetric) and with different up-sampling resolutions (i.e., 2X, 4X). Our method improves the median PSNR value with respect to SOTA methods by [Formula: see text] on obstetric 2X raw images, [Formula: see text] on cardiac 2X raw images, and [Formula: see text] on abdominal 4X raw images; it also improves the number of pixels with a low prediction error by [Formula: see text] on obstetric 4X raw images, [Formula: see text] on cardiac 4X raw images, and [Formula: see text] on abdominal 4X raw images. The proposed method is then applied to the spatial super-resolution of 2D videos, by optimising the sampling of the lines acquired by the probe in terms of acquisition frequency. Our method specialises the trained networks to predict the high-resolution target through the design of the network architecture and the loss function, taking into account the anatomical district and the up-sampling factor, and exploiting a large ultrasound data set. The use of deep learning on large data sets overcomes the limitations of vision-based algorithms, which are general and do not encode the characteristics of the data. Furthermore, the data set can be enriched with images selected by medical experts to further specialise the individual networks. Through learning and high-performance computing, the proposed super-resolution is specialised to different anatomical districts by training multiple networks. Furthermore, the computational demand is shifted to centralised hardware resources, with real-time execution of the network's prediction on local devices.
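The pipeline above first up-samples the low-resolution image with a vision-based interpolation method and then measures quality with PSNR. A minimal sketch of those two building blocks, assuming grayscale images stored as NumPy arrays; the function names are illustrative, not from the paper, and bilinear interpolation stands in for whichever vision-based interpolation the authors use:

```python
import numpy as np

def upsample_2x_bilinear(img: np.ndarray) -> np.ndarray:
    """Naive 2X bilinear up-sampling of a 2D grayscale image
    (a stand-in for the vision-based interpolation step)."""
    h, w = img.shape
    # Low-resolution coordinates sampled by each high-resolution pixel.
    ys = np.linspace(0, h - 1, 2 * h)
    xs = np.linspace(0, w - 1, 2 * w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]  # vertical interpolation weights
    wx = (xs - x0)[None, :]  # horizontal interpolation weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio, the quality metric reported in the abstract."""
    mse = np.mean((pred.astype(float) - target.astype(float)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

In the paper's framework this interpolated output is only an intermediate result: a network trained per anatomical district and per up-sampling factor then refines it toward the high-resolution target.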


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2f30/10533602/b535e4488479/11517_2023_2818_Fig1_HTML.jpg
