Institute of Computer Science, Johannes Gutenberg University, Mainz, Germany.
Computer Vision & Mixed Reality Group, RheinMain University of Applied Sciences, Wiesbaden, Germany.
Med Phys. 2023 Jun;50(6):3511-3525. doi: 10.1002/mp.16347. Epub 2023 Apr 11.
Patient motion is a frequently reported phenomenon in oral and maxillofacial cone beam CT scans, leading to reconstructions of limited usability. In certain cases, independent movements of the mandible induce unpredictable motion patterns. Previous motion correction methods cannot handle such complex patient movements.
Our goal was to design a combined motion estimation and motion correction approach for separate cranial and mandibular motions, solely based on the 2D projection images from a single scan.
Our iterative three-step motion correction algorithm models the two articulated motions as independent rigid motions. First, we segment the cranium and the mandible in the projection images using a deep neural network. Second, we compute a 3D reconstruction with the poses along both objects' trajectories held fixed. Third, we improve all poses by minimizing the projection error while keeping the reconstruction fixed. Steps two and three are repeated in alternation, as sketched below.
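To make the alternating scheme concrete, here is a minimal, hypothetical sketch, not the authors' implementation. The toy 2D-shift "projector", the use of SciPy's Powell optimizer, and all array shapes and function names are illustrative assumptions; the paper works with cone-beam geometry and separate rigid 3D poses for cranium and mandible.

```python
# Minimal sketch of alternating steps 2 and 3 (assumed structure, not the paper's code).
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def forward_project(volume, pose):
    """Toy projector: sum the volume along one axis and translate the result by
    the 2D pose. A real pipeline would use a cone-beam forward projector."""
    proj = volume.sum(axis=0)
    return nd_shift(proj, shift=pose, order=1, mode="wrap")

def reconstruct(projections, poses, shape):
    """Step 2: reconstruct with all per-view poses held fixed, here by
    un-shifting each projection and back-distributing it into the volume."""
    volume = np.zeros(shape)
    for proj, pose in zip(projections, poses):
        aligned = nd_shift(proj, shift=-np.asarray(pose), order=1, mode="wrap")
        volume += aligned[None, :, :] / shape[0]
    return volume / len(projections)

def refine_poses(projections, volume, poses):
    """Step 3: improve each per-view pose by minimizing the projection error
    while the reconstruction stays fixed."""
    refined = []
    for proj, pose in zip(projections, poses):
        error = lambda p, proj=proj: np.sum((forward_project(volume, p) - proj) ** 2)
        refined.append(minimize(error, pose, method="Powell").x)
    return np.array(refined)

# Alternate steps 2 and 3; step 1 (the network-based cranium/mandible
# segmentation of the projections) is assumed to have happened beforehand.
rng = np.random.default_rng(0)
truth = rng.random((16, 32, 32))
true_poses = rng.uniform(-2.0, 2.0, size=(8, 2))
projections = [forward_project(truth, p) for p in true_poses]

poses = np.zeros((8, 2))                                   # start from "no motion"
for _ in range(5):
    volume = reconstruct(projections, poses, truth.shape)  # step 2: poses fixed
    poses = refine_poses(projections, volume, poses)       # step 3: volume fixed
```

In the actual method, the segmentation from step 1 would be used to evaluate the projection error separately on the cranial and mandibular image regions, so that each structure receives its own rigid pose trajectory.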
We find that our marker-free approach delivers reconstructions of up to 85% higher quality with respect to the projection error and improves on existing techniques that model only a single rigid motion. We show results on both synthetic and real data created in different scenarios. The recovery of motion parameters in a real environment was evaluated on acquisitions of a skull mounted on a hexapod, which produced a realistic, easily reproducible motion profile.
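For orientation, the snippet below shows one plausible way to express such a figure as a relative reduction of the projection error; the paper defines its own quality metric and normalization, so the formula here is an assumption, not the authors' definition.

```python
# One possible reading of "up to 85% higher quality with respect to the
# projection error": a relative reduction of the projection error (assumption).
def relative_improvement(err_uncorrected, err_corrected):
    return 1.0 - err_corrected / err_uncorrected

print(relative_improvement(1.0, 0.15))  # 0.85 -> 85% lower projection error
```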
The proposed algorithm consistently enhances the visual quality of motion-impaired cone beam computed tomography scans, thus eliminating the need for a re-scan in certain cases and considerably lowering the radiation dose for the patient. It can be used flexibly with regions of interest of different sizes and is even applicable to local tomography.