Silverstein Jonathan C, Dech Fred, Kouchoukos Philip L
Department of Surgery, The University of Chicago Hospitals Room A-105, MC 6051, 5841 S. Maryland Avenue Chicago, Illinois 60637-1470, USA.
Stud Health Technol Inform. 2004;98:347-52.
Radiological volumes are typically reviewed by surgeons using cross-sections and iso-surface reconstructions. Applications that combine collaborative stereo volume visualization with symbolic anatomic information and data fusion would expand surgeons' capabilities in interpreting data and in planning treatment. No such application has yet been seen in clinical use. We are developing methods to systematically combine symbolic anatomy (term hierarchies and iso-surface atlases) with patient data using data fusion, and we describe our progress toward integrating these methods into our collaborative virtual reality application. The fully combined application will be a feature-rich, stereo, collaborative volume visualization environment for surgeons in which DICOM datasets will self-report the underlying anatomy with visual feedback. Using hierarchical navigation of SNOMED-CT anatomic terms integrated with our existing Tele-immersive, DICOM-based volumetric rendering application, we will display polygonal representations of anatomic systems on the fly from menus that query a database. The methods and tools involved in this application development are SNOMED-CT, DICOM, the Visible Human dataset, volumetric fusion, and C++ on a Tele-immersive platform. The application will allow us to identify structures and display polygonal representations from atlas data overlaid on the volume rendering. First, during loading, atlas data is automatically translated, rotated, and scaled to the patient data using a public-domain volumetric fusion algorithm. This generates a modified symbolic representation of the underlying canonical anatomy. Then, through collision detection or intersection testing of the various transparent polygonal representations, the polygonal structures are highlighted within the volumetric representation while their SNOMED names are displayed. Thus, structural names and polygonal models are associated with the visualized DICOM data.
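The hierarchical navigation of anatomic terms described above can be sketched as a tree of concepts expanded one level at a time, as a menu would. This is a minimal illustration only: the `TermHierarchy` class, the `expand` operation, and the concept codes are invented stand-ins for the database-backed SNOMED-CT queries the abstract describes.

```cpp
// Minimal sketch of hierarchical anatomic-term navigation, assuming a
// hypothetical in-memory concept tree in place of a real SNOMED-CT database.
#include <map>
#include <string>
#include <vector>

// Each node is a SNOMED-style concept; children model the hierarchy that a
// menu would expand one level at a time.
struct Concept {
    std::string code;                   // placeholder, not a real SNOMED-CT ID
    std::string name;                   // display name shown in the menu
    std::vector<std::string> children;  // codes of child concepts
};

class TermHierarchy {
public:
    void add(const Concept& c) { concepts_[c.code] = c; }

    // Returns the display names of one level of children, as a menu would.
    std::vector<std::string> expand(const std::string& code) const {
        std::vector<std::string> names;
        auto it = concepts_.find(code);
        if (it == concepts_.end()) return names;
        for (const auto& childCode : it->second.children) {
            auto cit = concepts_.find(childCode);
            if (cit != concepts_.end()) names.push_back(cit->second.name);
        }
        return names;
    }

private:
    std::map<std::string, Concept> concepts_;
};

// Build a toy hierarchy for illustration.
TermHierarchy buildDemoHierarchy() {
    TermHierarchy h;
    h.add({"body", "Human body", {"cvs"}});
    h.add({"cvs", "Cardiovascular system", {"heart", "aorta"}});
    h.add({"heart", "Heart", {}});
    h.add({"aorta", "Aorta", {}});
    return h;
}
```

In the real application the `expand` call would instead issue a database query, and selecting a returned name would trigger loading of the matching polygonal atlas model.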
This novel juxtaposition of information promises to expand surgeons' abilities to interpret images and plan treatment.
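The registration step, in which atlas data is translated, rotated, and scaled to the patient data, amounts to applying a similarity transform to atlas coordinates. The sketch below assumes the transform parameters have already been recovered by the volumetric fusion algorithm; the `SimilarityTransform` type and the single-axis rotation are simplifications for illustration, not the paper's actual fusion code.

```cpp
// Minimal sketch of the atlas-to-patient alignment step: a similarity
// transform (uniform scale, rotation, translation) applied to an atlas point.
// The real system derives these parameters from a public-domain volumetric
// fusion algorithm; here they are supplied directly for illustration.
#include <cmath>

struct Vec3 { double x, y, z; };

struct SimilarityTransform {
    double scale;        // uniform scale factor
    double yawRadians;   // rotation about the z axis (1-DOF simplification)
    Vec3 translation;    // shift into patient coordinates

    // p' = R * (s * p) + t
    Vec3 apply(const Vec3& p) const {
        const double sx = scale * p.x, sy = scale * p.y, sz = scale * p.z;
        const double c = std::cos(yawRadians), s = std::sin(yawRadians);
        return { c * sx - s * sy + translation.x,
                 s * sx + c * sy + translation.y,
                 sz + translation.z };
    }
};
```

Applying this transform to every vertex of an atlas mesh during loading puts the canonical anatomy into the patient's coordinate frame, so the polygonal models line up with the rendered DICOM volume.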
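The intersection-testing step that associates SNOMED names with visualized structures can be illustrated with a simple bounding-box hit test. The `AABB` and `LabeledStructure` types and the hard-coded boxes below are hypothetical stand-ins for the application's actual collision detection against transparent polygonal models.

```cpp
// Minimal sketch of labeling by intersection testing: a probe point (e.g. a
// cursor position inside the volume) is tested against axis-aligned bounding
// boxes of labeled structures, a common cheap stand-in for full polygonal
// collision detection.
#include <string>
#include <vector>

struct AABB {
    double minX, minY, minZ, maxX, maxY, maxZ;
    bool contains(double x, double y, double z) const {
        return x >= minX && x <= maxX &&
               y >= minY && y <= maxY &&
               z >= minZ && z <= maxZ;
    }
};

struct LabeledStructure {
    std::string snomedName;  // display name associated with the model
    AABB bounds;             // bounding box of the polygonal structure
};

// Returns the SNOMED name of the first structure whose bounding box contains
// the probe point, or an empty string when nothing is hit.
std::string hitTest(const std::vector<LabeledStructure>& structures,
                    double x, double y, double z) {
    for (const auto& s : structures) {
        if (s.bounds.contains(x, y, z)) return s.snomedName;
    }
    return "";
}
```

In the described system a hit would both highlight the intersected polygonal structure within the volume rendering and display its SNOMED name; a production implementation would refine the box test with per-triangle intersection against the mesh.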