
Deep Learning to Obtain Simultaneous Image and Segmentation Outputs From a Single Input of Raw Ultrasound Channel Data.

Publication Information

IEEE Trans Ultrason Ferroelectr Freq Control. 2020 Dec;67(12):2493-2509. doi: 10.1109/TUFFC.2020.2993779. Epub 2020 Nov 24.

Abstract

Single plane wave transmissions are promising for automated imaging tasks requiring high ultrasound frame rates over an extended field of view. However, a single plane wave insonification typically produces suboptimal image quality. To address this limitation, we are exploring the use of deep neural networks (DNNs) as an alternative to delay-and-sum (DAS) beamforming. The objectives of this work are to obtain information directly from raw channel data and to simultaneously generate both a segmentation map for automated ultrasound tasks and a corresponding ultrasound B-mode image for interpretable supervision of the automation. We focus on visualizing and segmenting anechoic targets surrounded by tissue and ignoring or deemphasizing less important surrounding structures. DNNs trained with Field II simulations were tested with simulated, experimental phantom, and in vivo data sets that were not included during training. With unfocused input channel data (i.e., prior to the application of receive time delays), simulated, experimental phantom, and in vivo test data sets achieved mean ± standard deviation Dice similarity coefficients of 0.92 ± 0.13, 0.92 ± 0.03, and 0.77 ± 0.07, respectively, and generalized contrast-to-noise ratios (gCNRs) of 0.95 ± 0.08, 0.93 ± 0.08, and 0.75 ± 0.14, respectively. With subaperture beamformed channel data and a modification to the input layer of the DNN architecture to accept these data, the fidelity of image reconstruction increased (e.g., mean gCNR of multiple acquisitions of two in vivo breast cysts ranged 0.89-0.96), but DNN display frame rates were reduced from 395 to 287 Hz. Overall, the DNNs successfully translated feature representations learned from simulated data to phantom and in vivo data, which is promising for this novel approach to simultaneous ultrasound image formation and segmentation.
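The two evaluation metrics quoted above can be made concrete. The Dice similarity coefficient measures overlap between a predicted and a reference binary segmentation mask, and the generalized contrast-to-noise ratio (gCNR) is one minus the overlap of the pixel-intensity histograms of a target region and its background. Below is a minimal numpy sketch of both metrics; the function names and the histogram bin count are illustrative choices, not taken from the paper's implementation.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / total

def gcnr(target, background, bins=256):
    """Generalized CNR: 1 minus the overlap of the normalized
    intensity histograms of the target and background regions."""
    target = np.asarray(target).ravel()
    background = np.asarray(background).ravel()
    lo = min(target.min(), background.min())
    hi = max(target.max(), background.max())
    p_t, _ = np.histogram(target, bins=bins, range=(lo, hi))
    p_b, _ = np.histogram(background, bins=bins, range=(lo, hi))
    p_t = p_t / p_t.sum()
    p_b = p_b / p_b.sum()
    return 1.0 - np.minimum(p_t, p_b).sum()
```

With these definitions, a perfectly reproduced mask yields a Dice score of 1.0, and fully separated target/background intensity distributions yield a gCNR approaching 1.0, matching the scale of the values reported in the abstract.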


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea30/7990652/4307c28f2cfb/nihms-1649524-f0006.jpg
