Zhang Gaobo, Gu Wenting, Yue Yaoting, Tang Meng-Xing, Luo Jianwen, Liu Xin, Ta Dean
IEEE Trans Ultrason Ferroelectr Freq Control. 2024 Dec;71(12):1735-1751. doi: 10.1109/TUFFC.2024.3388102. Epub 2025 Jan 8. (Special section: Breaking the Resolution Barrier in Ultrasound.)
Ultrasound localization microscopy (ULM) overcomes the acoustic diffraction limit by localizing tiny microbubbles (MBs), enabling the microvasculature to be rendered at subwavelength resolution. To obtain such spatial resolution, however, tens of seconds must be spent acquiring numerous ultrasound (US) frames to accumulate the required MB events, so ULM imaging still suffers from tradeoffs among imaging quality, data acquisition time, and data processing speed. In this article, we present a new deep learning (DL) framework combining a multibranch convolutional neural network (CNN) and a recursive transformer (RT), termed ULM-MbCNRT, which is capable of reconstructing a super-resolution (SR) image directly from a temporal mean low-resolution image generated by averaging far fewer raw US frames, i.e., of implementing ultrafast ULM imaging. To evaluate the performance of ULM-MbCNRT, a series of numerical simulations and in vivo experiments were carried out. Numerical simulation results indicate that ULM-MbCNRT achieves high-quality ULM imaging with a 10-fold reduction in data acquisition time and a ~130-fold reduction in computation time compared to a previous DL method (the modified subpixel CNN, ULM-mSPCN). In the in vivo experiments, compared to ULM-mSPCN, ULM-MbCNRT allows a ~37-fold reduction in data acquisition time (0.8 s) and a 2134-fold reduction in computation time (0.87 s) without sacrificing spatial resolution. These results imply that ultrafast ULM imaging holds promise for observing rapid biological activity in vivo, potentially improving the diagnosis and monitoring of clinical conditions.
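To illustrate the input-output relationship described in the abstract (averaging a short stack of raw US frames into one temporal mean low-resolution image, then reconstructing a super-resolution map in a single forward pass), the following is a minimal sketch, not the authors' ULM-MbCNRT architecture: the network here (`ToyUpsampler`), the frame count, and the 8x upsampling factor are placeholder assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Hypothetical settings (not from the paper): 40 raw US frames averaged into one
# low-resolution input, then upsampled 8x by a placeholder network.
N_FRAMES, H, W, UPSCALE = 40, 128, 128, 8

class ToyUpsampler(nn.Module):
    """Placeholder stand-in for ULM-MbCNRT: a small CNN with sub-pixel upsampling.
    The actual model combines a multibranch CNN with a recursive transformer."""
    def __init__(self, upscale: int = UPSCALE):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, upscale * upscale, kernel_size=3, padding=1),
            nn.PixelShuffle(upscale),  # rearranges channels into a finer spatial grid
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

# Simulated stack of raw US (microbubble) frames: (frames, H, W).
frames = torch.rand(N_FRAMES, H, W)

# Step 1: temporal mean over the short acquisition -> one low-resolution image.
lr_image = frames.mean(dim=0, keepdim=True).unsqueeze(0)   # shape (1, 1, H, W)

# Step 2: a single forward pass reconstructs the super-resolution map.
model = ToyUpsampler()
with torch.no_grad():
    sr_image = model(lr_image)                              # shape (1, 1, H*8, W*8)
print(lr_image.shape, sr_image.shape)
```

The point of the sketch is the workflow: once such a network is trained, the costly per-frame MB localization and long accumulation are replaced by one averaging step and one inference pass, which is what enables the reported reductions in acquisition and computation time.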