Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, 464-8601, Japan.
School of Information Science, Aichi Institute of Technology, Yachigusa 1247, Yakusa-cho, Toyota, Aichi, 470-0356, Japan.
Int J Comput Assist Radiol Surg. 2020 Oct;15(10):1619-1630. doi: 10.1007/s11548-020-02241-9. Epub 2020 Aug 7.
Because of the complex anatomical structure of the bronchi and the similar appearance of the inner surfaces of the airway lumina, bronchoscopic examinations require additional 3D navigational information to assist physicians. A bronchoscopic navigation system provides the position of the endoscope in CT images together with augmented anatomical information. To overcome the shortcomings of previous navigation systems, we propose using a technique known as visual simultaneous localization and mapping (SLAM) to improve bronchoscope tracking in navigation systems.
We propose an improved version of the visual SLAM algorithm and use it to estimate the bronchoscope camera pose with patient-specific bronchoscopic video as input. We improve the tracking procedure by adding stricter criteria to the feature-matching step to avoid mismatches. For validation, we recorded several trials of bronchoscopic video with a bronchoscope camera while exploring synthetic rubber bronchus phantoms. We simulated breathing by applying a periodic force that deforms the phantom. We compared the camera positions estimated by visual SLAM with a manually created ground truth of the camera pose. The number of successfully tracked frames was also compared between the original SLAM and the proposed method.
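The abstract does not specify which matching criteria were added; a minimal sketch of the general idea, assuming ORB-style binary descriptors, is brute-force Hamming matching that rejects a candidate unless it passes both Lowe's ratio test and an absolute distance cap (the `ratio` and `max_dist` thresholds here are illustrative values, not the paper's):

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.7, max_dist=40):
    """Brute-force Hamming matching with two rejection criteria:
    Lowe's ratio test and an absolute distance cap.
    desc_a, desc_b: (N, 32) uint8 binary descriptors (ORB-style).
    Returns a list of (index_a, index_b, distance) tuples."""
    # Hamming distance between every descriptor pair: XOR, then count set bits
    xor = desc_a[:, None, :] ^ desc_b[None, :, :]
    dist = np.unpackbits(xor, axis=2).sum(axis=2)  # shape (Na, Nb)
    matches = []
    for i, row in enumerate(dist):
        order = np.argsort(row)
        best, second = row[order[0]], row[order[1]]
        # Stricter acceptance: best match must be clearly better than the
        # runner-up (ratio test) AND close in absolute terms (distance cap)
        if best < ratio * second and best <= max_dist:
            matches.append((i, int(order[0]), int(best)))
    return matches
```

Tightening both thresholds trades the number of matches for their reliability, which is the stated goal of avoiding mismatches during tracking.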
We successfully tracked 29,559 frames at a speed of 80 ms per frame, corresponding to 78.1% of all acquired frames. The average root mean square error of our method was 3.02 mm, compared with 3.61 mm for the original technique.
We present a novel methodology using visual SLAM for bronchoscope tracking. Our experimental results showed that it is feasible to use visual SLAM for the estimation of the bronchoscope camera pose during bronchoscopic navigation. Our proposed method tracked more frames and showed higher accuracy than the original technique did. Future work will include combining the tracking results with virtual bronchoscopy and validation with in vivo cases.