School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China.
Research Institute for Frontier Science, Beihang University, Beijing, China.
Ultrasound Med Biol. 2023 Oct;49(10):2234-2246. doi: 10.1016/j.ultrasmedbio.2023.07.005. Epub 2023 Aug 4.
Plane-wave imaging (PWI) is a high-frame-rate imaging technique that sacrifices image quality. Deep learning can potentially enhance plane-wave image quality, but processing complex in-phase and quadrature (IQ) data and suppressing incoherent signals pose challenges. To address these challenges, we present a complex transformer network (CTN) that integrates complex convolution and complex self-attention (CSA) modules.
The CTN operates in a four-step process: delaying complex IQ data from a 0° single-angle plane wave for each pixel as CTN input data; extracting reconstruction features with a complex convolution layer; suppressing irrelevant features derived from incoherent signals with two CSA modules; and forming output images with another complex convolution layer. The training labels are generated by minimum variance (MV).
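The first two steps above can be sketched numerically. The snippet below is a minimal illustration under assumed parameters (speed of sound `c`, demodulation frequency `f0`, element positions `elem_x`), not the authors' implementation: it computes the standard 0° plane-wave round-trip delays for one pixel, applies the baseband phase rotation, and verifies that a complex convolution equals its decomposition into four real convolutions, which is the identity complex convolution layers are built on.

```python
import numpy as np

# --- Step 1 (sketch): 0-degree plane-wave delay for one pixel ---
# The transmitted plane wave reaches depth z at t = z/c; the echo returns to
# an element at x_e after sqrt((x - x_e)^2 + z^2)/c. Assumed geometry.
c = 1540.0            # speed of sound, m/s (typical soft tissue, assumed)
f0 = 5e6              # demodulation frequency, Hz (assumed)
x, z = 0.0, 0.02      # pixel position, m (assumed)
elem_x = np.linspace(-0.019, 0.019, 128)  # element positions, m (assumed)
tau = (z + np.sqrt((x - elem_x) ** 2 + z ** 2)) / c

# Delaying baseband IQ data also requires rotating the phase by exp(j*2*pi*f0*tau).
phase = np.exp(2j * np.pi * f0 * tau)

# --- Step 2 (sketch): complex convolution as four real convolutions ---
# (Wr + jWi) * (xr + jxi) = (Wr*xr - Wi*xi) + j*(Wr*xi + Wi*xr)
rng = np.random.default_rng(0)
iq = (rng.standard_normal(128) + 1j * rng.standard_normal(128)) * phase
w = rng.standard_normal(5) + 1j * rng.standard_normal(5)  # toy complex kernel

direct = np.convolve(iq, w, mode="valid")                 # complex convolution
split = (np.convolve(iq.real, w.real, mode="valid")
         - np.convolve(iq.imag, w.imag, mode="valid")
         + 1j * (np.convolve(iq.real, w.imag, mode="valid")
                 + np.convolve(iq.imag, w.real, mode="valid")))
assert np.allclose(direct, split)  # both formulations agree
```

In a deep-learning framework the same identity lets a complex convolution layer be implemented with real-valued convolution kernels applied to the real and imaginary channels of the IQ data.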
Simulation, phantom and in vivo experiments revealed that CTN produced comparable- or even higher-quality images than MV, but with much shorter computation time. Evaluation metrics were contrast ratio, contrast-to-noise ratio, generalized contrast-to-noise ratio and lateral and axial full width at half-maximum; for CTN these were -11.59 dB, 1.16, 0.68, 278 μm and 329 μm in simulation, respectively, and 9.87 dB, 0.96, 0.62, 357 μm and 305 μm in the phantom experiment, respectively. In vivo experiments further indicated that CTN could significantly improve details that were previously vague or even invisible in DAS and MV images. After GPU acceleration, the CTN runtime (76.03 ms) was comparable to that of delay-and-sum (DAS, 61.24 ms).
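The three contrast metrics above have standard definitions in the ultrasound literature; the sketch below uses those standard forms (contrast ratio in dB of mean envelopes, CNR as mean difference over pooled standard deviation, gCNR as one minus histogram overlap) on synthetic Rayleigh-distributed envelope data, and is not the paper's exact evaluation code.

```python
import numpy as np

def contrast_ratio(env_in, env_out):
    # CR in dB between the mean envelope inside the target and in the background.
    return 20 * np.log10(env_in.mean() / env_out.mean())

def cnr(env_in, env_out):
    # Contrast-to-noise ratio: mean difference over pooled standard deviation.
    return abs(env_in.mean() - env_out.mean()) / np.sqrt(env_in.var() + env_out.var())

def gcnr(env_in, env_out, bins=256):
    # Generalized CNR: 1 - overlap of the two envelope histograms;
    # 1 means the regions are perfectly separable.
    lo = min(env_in.min(), env_out.min())
    hi = max(env_in.max(), env_out.max())
    p_in, _ = np.histogram(env_in, bins=bins, range=(lo, hi))
    p_out, _ = np.histogram(env_out, bins=bins, range=(lo, hi))
    return 1.0 - np.minimum(p_in / p_in.sum(), p_out / p_out.sum()).sum()

# Synthetic example: fully developed speckle has a Rayleigh envelope.
rng = np.random.default_rng(1)
bg = rng.rayleigh(1.0, 10000)     # background speckle (assumed scale)
cyst = rng.rayleigh(0.1, 10000)   # darker cyst-like region (assumed scale)
```

With these scales the cyst is about 20 dB darker than the background, so `contrast_ratio(cyst, bg)` comes out near -20 dB and `gcnr(cyst, bg)` close to 1.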
The proposed CTN significantly improved image contrast and resolution over the MV beamformer and recovered details that MV left unclear, making it an efficient tool for high-frame-rate imaging.