IEEE Trans Ultrason Ferroelectr Freq Control. 2021 Jul;68(7):2472-2481. doi: 10.1109/TUFFC.2021.3068377. Epub 2021 Jul 5.
Ultrasound elasticity imaging in soft tissue with acoustic radiation force requires the estimation of displacements, typically on the order of several microns, from serially acquired raw data A-lines. In this work, we implement a fully convolutional neural network (CNN) for ultrasound displacement estimation. We present a novel method for generating ultrasound training data, in which synthetic 3-D displacement volumes containing a combination of randomly seeded ellipsoids are created and used to displace scatterers, which are then imaged in simulation using Field II. Network performance was tested on these virtual displacement volumes, as well as on an experimental acoustic radiation force impulse (ARFI) phantom data set and a human in vivo prostate ARFI data set. In the simulated data, the proposed neural network performed comparably to Loupas's algorithm, a conventional phase-based displacement estimation algorithm; the rms error was [Formula: see text] for the CNN and 0.73 μm for Loupas. Similarly, in the phantom data, the contrast-to-noise ratio (CNR) of a stiff inclusion was 2.27 for the CNN-estimated image and 2.21 for the Loupas-estimated image. Applying the trained network to in vivo data enabled the visualization of prostate cancer and prostate anatomy. The proposed training method provided 26 000 training cases, which allowed robust network training. The CNN's computation time was comparable to that of Loupas's algorithm; further refinements to the network architecture may reduce the computation time. We conclude that deep neural network-based displacement estimation from ultrasonic data is feasible, providing accuracy and speed comparable to current standard time-delay estimation approaches.
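To make the training-data generation scheme concrete, the sketch below builds a synthetic displacement volume from randomly seeded ellipsoids and applies it to a scatterer cloud. It is an illustrative reconstruction from the description above, not the authors' code; the grid size, ellipsoid count, peak-displacement scale, and coordinate conventions are assumed values. The displaced and undisplaced scatterer sets would then each be imaged with Field II to form the paired A-lines used for training.

```python
import numpy as np

def random_ellipsoid_displacement(shape=(64, 64, 64), n_ellipsoids=5,
                                  max_disp_um=10.0, rng=None):
    """Synthetic axial displacement volume (microns) built from randomly
    seeded ellipsoidal bumps; all sizes and scales here are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    z, y, x = np.meshgrid(*[np.linspace(-1, 1, s) for s in shape], indexing="ij")
    disp = np.zeros(shape)
    for _ in range(n_ellipsoids):
        center = rng.uniform(-0.8, 0.8, size=3)        # ellipsoid center
        radii = rng.uniform(0.1, 0.5, size=3)          # ellipsoid semi-axes
        amp = rng.uniform(-1.0, 1.0) * max_disp_um     # peak displacement
        r2 = (((z - center[0]) / radii[0]) ** 2
              + ((y - center[1]) / radii[1]) ** 2
              + ((x - center[2]) / radii[2]) ** 2)
        disp += amp * np.exp(-r2)                      # smooth bump on ellipsoidal isocontours
    return disp

def displace_scatterers(positions_mm, disp_volume_um, volume_extent_mm):
    """Shift scatterer axial positions by the displacement sampled (nearest
    neighbor) from the synthetic volume. positions_mm is (N, 3), assumed to
    span [-extent/2, extent/2] in each dimension; axial is the third column."""
    shape = np.array(disp_volume_um.shape)
    idx = np.clip(((positions_mm / np.asarray(volume_extent_mm) + 0.5)
                   * (shape - 1)).astype(int), 0, shape - 1)
    axial_shift_mm = disp_volume_um[idx[:, 0], idx[:, 1], idx[:, 2]] * 1e-3
    moved = positions_mm.copy()
    moved[:, 2] += axial_shift_mm
    return moved
```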
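For context on the phase-based baseline the CNN is compared against, the following is a minimal Kasai-style 1-D autocorrelation estimator; Loupas's full 2-D autocorrelator additionally corrects for the local center frequency, so this is a simplified stand-in rather than the algorithm used in the paper, and all parameter values are assumptions.

```python
import numpy as np

def phase_shift_displacement(iq_ref, iq_track, fc, c=1540.0, kernel=16):
    """Estimate axial displacement (meters) between two complex baseband (IQ)
    A-lines from the phase of their lag-one autocorrelation over an axial
    kernel. fc: center frequency [Hz], c: sound speed [m/s]."""
    n_est = len(iq_ref) // kernel
    disp = np.zeros(n_est)
    for k in range(n_est):
        sl = slice(k * kernel, (k + 1) * kernel)
        # complex correlation between tracked and reference data in the kernel
        r = np.sum(iq_track[sl] * np.conj(iq_ref[sl]))
        phi = np.angle(r)                        # phase shift in radians
        disp[k] = c * phi / (4.0 * np.pi * fc)   # phase -> axial displacement
    return disp
```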
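The contrast-to-noise ratio quoted for the stiff inclusion is assumed here to follow the standard definition used in ARFI imaging (the paper may state an equivalent form), with μ and σ the mean and standard deviation of the estimated displacement inside and outside the inclusion:

\mathrm{CNR} = \frac{\lvert \mu_{\text{in}} - \mu_{\text{out}} \rvert}{\sqrt{\sigma_{\text{in}}^{2} + \sigma_{\text{out}}^{2}}}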