Lau Yu Shi, Tan Li Kuo, Chee Kok Han, Chan Chow Khuen, Liew Yih Miin
Faculty of Engineering, Department of Biomedical Engineering, Universiti Malaya, Kuala Lumpur, Malaysia.
Faculty of Medicine, Department of Biomedical Imaging, Universiti Malaya, Kuala Lumpur, Malaysia.
Phys Eng Sci Med. 2025 Mar;48(1):251-271. doi: 10.1007/s13246-024-01509-7. Epub 2025 Jan 6.
Neointimal coverage and stent apposition, as assessed from intravascular optical coherence tomography (IVOCT) images, are crucial for optimizing percutaneous coronary intervention (PCI). Existing state-of-the-art computer algorithms designed to automate this analysis often treat lumen and stent segmentation as separate target entities, apply only to a single stent type, and overlook automating the preselection of which pullback segments need segmentation, thus limiting their practicality. This study aimed to develop an algorithm capable of intelligently handling the entire IVOCT pullback across different phases of PCI and clinical scenarios, including the presence and coexistence of metal and bioresorbable vascular scaffold (BVS) stent types. We propose a multi-task deep learning model, named TriVOCTNet, that automates image classification/selection, lumen segmentation, and stent strut segmentation within a single network by integrating classification, regression, and pixel-level segmentation models. This approach allows a single-network, single-pass implementation with all tasks parallelized for speed and convenience. A joint loss function was specifically designed to optimize each task in situations where any given task may or may not be present. Evaluation on 4,746 images achieved classification accuracies of 0.999, 0.997, and 0.998 for lumen, BVS, and metal stent features, respectively. Lumen segmentation showed a Euclidean distance error of 21.72 μm and a Dice coefficient of 0.985. For BVS strut segmentation, the Dice coefficient was 0.896; for metal stent strut segmentation, precision was 0.895 and sensitivity was 0.868. These fast and accurate results, together with the simplicity of handling all tasks and scenarios through a single system, highlight TriVOCTNet's clinical potential.
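The abstract describes a joint loss that optimizes each task only when that task's target is actually present in a frame (e.g. a pullback frame may contain a lumen but no stent struts). The paper's exact formulation is not given here; the following is a minimal illustrative sketch of one common way to realize such a loss, using presence masks to zero out absent tasks. The function name, the three-task breakdown, and the weighted-average form are all assumptions for illustration, not TriVOCTNet's actual implementation.

```python
import numpy as np

def masked_joint_loss(task_losses, task_present, weights=None):
    """Combine per-task losses, skipping tasks whose ground truth is absent.

    task_losses : per-task loss values, e.g. [classification, lumen seg, strut seg]
    task_present: binary mask, 1 if that task's target exists in the frame
    weights     : optional per-task weights (defaults to uniform)
    """
    losses = np.asarray(task_losses, dtype=float)
    mask = np.asarray(task_present, dtype=float)
    w = np.ones_like(losses) if weights is None else np.asarray(weights, dtype=float)

    # Absent tasks are masked out, so they contribute neither to the loss
    # value nor (in a framework with autograd) to the gradient signal.
    denom = (w * mask).sum()
    if denom == 0.0:
        return 0.0  # no supervised target in this frame
    return float((w * mask * losses).sum() / denom)
```

For example, a frame with lumen and metal-strut labels but no BVS labels would use a mask like `[1, 1, 0]`, so the BVS term is dropped and the remaining losses are averaged.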