Müller Sebastijan, Bihlmaier Andreas, Irgenfried Stephan, Wörn Heinz
Institute for Anthropomatics and Robotics (IAR) - Intelligent Process Control and Robotics Lab (IPR), Karlsruhe Institute of Technology (KIT), Germany.
Stud Health Technol Inform. 2016;220:245-50.
In this paper we present a method for combining real-time and non-real-time (photorealistic) rendering with open source software. Real-time rendering provides sufficient realism and is a good choice for most simulation and regression testing purposes in robot-assisted surgery. However, for proper end-to-end testing of the system, some computer vision algorithms require high-fidelity images that capture finer details of the real scene. One of the central practical obstacles to combining both worlds in a uniform way is creating models that are suitable for both rendering paradigms. We construct a modeling pipeline from open source tools, built on established, open standards for data exchange. The result is demonstrated through a unified model of the medical OpenHELP phantom used in the Gazebo robotics simulator, which can at the same time be rendered with higher visual fidelity in the Cycles ray tracer.
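To illustrate the kind of dual use the abstract describes, the sketch below shows a minimal Gazebo SDF model referencing a COLLADA mesh, since COLLADA is an open exchange format that Blender can also import for Cycles rendering. This is a hypothetical example; the model name and mesh path are invented for illustration, not taken from the paper.

```xml
<!-- Hypothetical SDF model; names and paths are illustrative only. -->
<sdf version="1.6">
  <model name="openhelp_phantom">
    <static>true</static>
    <link name="body">
      <visual name="visual">
        <geometry>
          <!-- The same .dae (COLLADA) file can be imported into
               Blender and rendered photorealistically with Cycles. -->
          <mesh>
            <uri>model://openhelp_phantom/meshes/phantom.dae</uri>
          </mesh>
        </geometry>
      </visual>
    </link>
  </model>
</sdf>
```

Keeping the geometry in a single COLLADA file is what lets the real-time simulator and the offline ray tracer share one model, which is the uniform-modeling goal the paper pursues.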