A Novel Solution of Using Mixed Reality in Bowel and Oral and Maxillofacial Surgical Telepresence: 3D Mean Value Cloning algorithm.

Author Information

Maharjan Arjina, Alsadoon Abeer, Prasad P W C, AlSallami Nada, Rashid Tarik A, Alrubaie Ahmad, Haddad Sami

Affiliations

School of Computing and Mathematics, Charles Sturt University, Sydney Campus, Australia.

Computer Science Department, Worcester State University, MA, USA.

Publication Information

Int J Med Robot. 2020 Sep 4:e2161. doi: 10.1002/rcs.2161.

Abstract

BACKGROUND AND AIM

Most mixed reality models used in surgical telepresence suffer from discrepancies in the boundary area and from spatial-temporal inconsistency caused by illumination variation across video frames. The aim of this work is to propose a new solution that produces a composite video by merging the augmented video of the surgical site with the virtual hand of the remote expert surgeon. The proposed solution is intended to decrease processing time and enhance the accuracy of the merged video by reducing overlay and visualization errors and removing occlusion and artefacts.

METHODOLOGY

The proposed system enhances the mean value cloning algorithm to maintain the spatial-temporal consistency of the final composite video. The enhanced algorithm incorporates 3D mean value coordinates and an improvised mean value interpolant into the image cloning process, which reduces the sawtooth, smudging and discoloration artefacts around the blending region.

RESULTS

Compared with the state-of-the-art solution, the accuracy of the proposed solution in terms of overlay error improves from 1.01 mm to 0.80 mm, while the accuracy in terms of visualization error improves from 98.8% to 99.4%. The processing time is reduced from 0.211 seconds to 0.173 seconds.

CONCLUSION

Our solution makes the object of interest consistent with the light intensity of the target image by adding a space-distance term that helps maintain spatial consistency in the final merged video.
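The abstract does not include algorithmic detail, so the following is a minimal sketch of the classical 2D mean-value-coordinates formulation of seamless image cloning, given only as a point of reference for the 3D extension described above. The function names, NumPy array shapes, and the assumption of an interior point inside a simple, star-shaped selection boundary are illustrative choices, not the authors' implementation.

```python
import numpy as np

def mean_value_coordinates(x, boundary):
    """Mean value coordinates of an interior point x with respect to the
    closed selection boundary (an ordered N x 2 array of vertices).
    Sketch only: assumes x is strictly inside and sees the boundary
    star-shaped, so unsigned angles suffice."""
    d = boundary - x                                 # vectors from x to each boundary vertex
    r = np.linalg.norm(d, axis=1)                    # distances |p_i - x|
    d_next = np.roll(d, -1, axis=0)                  # p_{i+1} - x
    cos_a = np.sum(d * d_next, axis=1) / (r * np.roll(r, -1))
    alpha = np.arccos(np.clip(cos_a, -1.0, 1.0))     # angle at x between p_i and p_{i+1}
    t = np.tan(alpha / 2.0)
    w = (np.roll(t, 1) + t) / r                      # w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / |p_i - x|
    return w / w.sum()                               # barycentric weights, sum to 1

def clone_pixel(x, boundary, src_on_boundary, tgt_on_boundary, src_value):
    """Blend one source pixel into the target: interpolate the per-vertex
    target/source colour difference along the boundary (the 'membrane')
    and add it to the source value, so the pasted region takes on the
    target image's illumination."""
    lam = mean_value_coordinates(np.asarray(x, dtype=float), boundary)
    membrane = lam @ (tgt_on_boundary - src_on_boundary)   # smooth interpolant r(x)
    return src_value + membrane                             # cloned value f(x) = g(x) + r(x)
```

In a full pipeline, every pixel of the pasted region (here, the remote surgeon's virtual hand) would be blended this way against the boundary of the target frame. Per the abstract, the proposed method replaces these 2D coordinates with 3D mean value coordinates and an improvised interpolant that adds a space-distance term, so the blend remains spatially and temporally consistent as illumination varies across frames.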

