Regodić Milovan, Bardosi Zoltan, Freysinger Wolfgang
Medical University of Innsbruck, Department of Otorhinolaryngology, Innsbruck, Austria.
Medical University of Vienna, Department of Radiation Oncology, Vienna, Austria.
J Med Imaging (Bellingham). 2021 Mar;8(2):025002. doi: 10.1117/1.JMI.8.2.025002. Epub 2021 Apr 28.
Automating fiducial detection and localization in the patient's pre-operative images can lead to better registration accuracy, reduced human error, and shorter intervention time. Most current approaches are optimized for a single marker type, mainly spherical adhesive markers. A fully automated algorithm is proposed and evaluated for screw and spherical titanium fiducials, which are typically used in high-accuracy frameless surgical navigation. The algorithm builds on previous approaches based on morphological functions and pose estimation algorithms. A 3D convolutional neural network (CNN) is proposed for the fiducial classification task and evaluated with both traditional closed-set and emerging open-set classifiers. A digital ground-truth experiment with cone-beam computed tomography (CBCT) imaging software is performed to determine the localization accuracy of the algorithm. The fiducial positions localized in the CBCT images by the presented algorithm were compared to the known positions in the virtual phantom models; the difference represents the fiducial localization error (FLE). A total of 241 screws, 151 spherical fiducials, and 1550 other structures were identified, with best true positive rates of 95.9% for screw and 99.3% for spherical fiducials at false positive rates of 8.7% and 3.4%, respectively. The best achieved FLE mean and its standard deviation for a screw and spherical marker are 58 (14) and , respectively. Accurate marker detection and localization were achieved, with spherical fiducials outperforming screws. Larger marker volumes and smaller voxel sizes yield significantly smaller FLEs. Attenuating noise by mesh smoothing has a minor effect on FLE. Future work will focus on expanding the CNN for image segmentation.
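The FLE described above is simply the Euclidean distance between each fiducial position localized by the algorithm and its known ground-truth position in the virtual phantom. A minimal sketch of that computation (assuming NumPy arrays of 3D coordinates; the function name and the example positions are illustrative, not from the paper):

```python
import numpy as np

def fiducial_localization_error(localized, ground_truth):
    """Per-marker FLE: Euclidean distance between each localized
    fiducial position and its known ground-truth position.
    Both inputs are (N, 3) arrays of 3D coordinates."""
    localized = np.asarray(localized, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return np.linalg.norm(localized - ground_truth, axis=1)

# Hypothetical example: two markers, coordinates in millimeters.
loc = [[10.0, 20.0, 30.05], [5.0, 5.0, 5.0]]
gt = [[10.0, 20.0, 30.00], [5.0, 5.0, 5.1]]
errors = fiducial_localization_error(loc, gt)
# The study reports FLE as mean (standard deviation) over markers.
print(errors.mean(), errors.std())
```

Aggregating these per-marker distances as a mean with standard deviation yields the FLE figures reported in the abstract.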