Lu Jingfeng, Millioz Fabien, Varray Francois, Poree Jonathan, Provost Jean, Bernard Olivier, Garcia Damien, Friboulet Denis
IEEE Trans Ultrason Ferroelectr Freq Control. 2023 Dec;70(12):1761-1772. doi: 10.1109/TUFFC.2023.3326377. Epub 2023 Dec 14.
High-quality ultrafast ultrasound imaging is based on coherent compounding from multiple transmissions of plane waves (PW) or diverging waves (DW). However, compounding reduces the frame rate and, if motion compensation (MoCo) is not applied, high-velocity tissue motion introduces destructive interference. While many studies have recently demonstrated the potential of deep learning for the reconstruction of high-quality static images from PW or DW, its ability to achieve such performance while preserving the capability to track cardiac motion has yet to be assessed. In this article, we addressed this issue by deploying a complex-weighted convolutional neural network (CNN) for image reconstruction together with a state-of-the-art speckle-tracking method. The evaluation of this approach was first performed by designing a dedicated simulation framework, which provides specific reference data, i.e., high-quality, motion artifact-free cardiac images. The results showed that, while using only three DWs as input, the CNN-based approach yielded an image quality and a motion accuracy equivalent to those obtained by compounding 31 DWs free of motion artifacts. The performance was then further evaluated on nonsimulated, experimental in vitro data, using a spinning-disk phantom. This experiment demonstrated that our approach yielded high-quality image reconstruction and motion estimation over a large range of velocities and outperformed a state-of-the-art MoCo-based approach at high velocities. Our method was finally assessed on in vivo datasets and showed consistent improvement in image quality and motion estimation compared to standard compounding. This demonstrates the feasibility and effectiveness of deep learning reconstruction for ultrafast speckle-tracking echocardiography.
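To make the compounding trade-off described above concrete, the following is a minimal sketch of coherent compounding of beamformed IQ images from successive DW transmits, and of the frame-rate versus image-quality trade-off between compounding 31 transmits and only 3 (the low-quality input a learned reconstruction would then enhance). This is not the authors' pipeline: the function names, array shapes, and the random data standing in for beamformed frames are illustrative assumptions.

```python
import numpy as np

def coherent_compound(iq_frames):
    """Coherently average beamformed IQ images from successive diverging-wave
    (DW) transmits. Averaging complex data preserves phase, which is what makes
    the compounding 'coherent' -- and what inter-transmit tissue motion degrades
    when no motion compensation is applied."""
    # iq_frames: complex array of shape (n_transmits, n_depth_samples, n_lines)
    return np.mean(iq_frames, axis=0)

def bmode(iq, dynamic_range_db=60.0):
    """Log-compressed envelope (B-mode display) of a compounded IQ image."""
    env = np.abs(iq)
    env /= env.max() + 1e-12
    img = 20.0 * np.log10(env + 1e-12)
    return np.clip(img, -dynamic_range_db, 0.0)

# Random complex data standing in for 31 beamformed DW frames (illustration only).
rng = np.random.default_rng(0)
iq_31 = rng.standard_normal((31, 256, 128)) + 1j * rng.standard_normal((31, 256, 128))

# Reference-quality image: compound all 31 transmits (frame rate divided by 31).
img_ref = bmode(coherent_compound(iq_31))

# Fast acquisition: compound only 3 transmits (frame rate divided by 3), at the
# cost of image quality -- the kind of input a CNN-based reconstruction targets.
img_fast = bmode(coherent_compound(iq_31[:3]))
```

In this sketch, moving from 31 to 3 transmits raises the achievable frame rate roughly tenfold, which is the regime in which the paper evaluates whether a complex-weighted CNN can recover both image quality and speckle-tracking accuracy.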