Billinghurst M, Savage J, Oppenheimer P, Edmond C
Human Interface Technology Laboratory, University of Washington, Seattle 98195, USA.
Stud Health Technol Inform. 1996;29:590-607.
Virtual Reality has made computer interfaces more intuitive but not more intelligent. This paper shows how an expert system can be coupled with multimodal input in a virtual environment to provide an intelligent simulation tool or surgical assistant. This is accomplished in three steps. First, voice and gestural input is interpreted and represented in a common semantic form. Second, a rule-based expert system is used to infer context and user actions from this semantic representation. Finally, the inferred user actions are matched against steps in a surgical procedure to monitor the user's progress and provide automatic feedback. In addition, the system can respond immediately to multimodal commands for navigational assistance and/or identification of critical anatomical structures. To show how these methods are used we present a prototype sinus surgery interface. The approach described here may easily be extended to a wide variety of medical and non-medical training applications by making simple changes to the expert system database and virtual environment models. Successful implementation of an expert system in both simulated and real surgery has enormous potential for the surgeon both in training and clinical practice.
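The three-step pipeline in the abstract — fusing voice and gesture into a common semantic form, inferring user actions with a rule-based expert system, and matching inferred actions against an ordered surgical procedure — can be sketched in miniature. This is a minimal illustration under assumed data shapes; every name here (`to_semantic_frame`, the rule list, the `PROCEDURE` steps) is hypothetical and not taken from the paper's implementation.

```python
# Hypothetical sketch of the paper's three-step pipeline.
# All function names, rule contents, and procedure steps are illustrative.

def to_semantic_frame(voice, gesture):
    """Step 1: fuse a voice phrase and a gesture into one semantic frame."""
    return {"verb": voice.split()[0], "target": gesture["pointed_at"]}

# Step 2: a stand-in for the rule-based expert system — each rule is a
# (condition on the frame, inferred action) pair; the first match fires.
RULES = [
    (lambda f: f["verb"] == "cut" and f["target"] == "uncinate_process",
     "incise_uncinate"),
    (lambda f: f["verb"] == "identify", "request_identification"),
]

def infer_action(frame):
    """Step 2: infer a user action from the semantic frame via the rules."""
    for condition, action in RULES:
        if condition(frame):
            return action
    return "unknown"

# Step 3: an ordered list of procedure steps to monitor progress against.
PROCEDURE = ["incise_uncinate", "remove_uncinate", "open_ethmoid_bulla"]

def monitor(actions):
    """Step 3: count how many procedure steps were completed in order."""
    step = 0
    for action in actions:
        if step < len(PROCEDURE) and action == PROCEDURE[step]:
            step += 1
    return step

# Usage: a spoken command plus a pointing gesture yields one frame,
# the rules infer the action, and the monitor advances one step.
frame = to_semantic_frame("cut here", {"pointed_at": "uncinate_process"})
action = infer_action(frame)
progress = monitor([action])
```

A real system would also need the immediate-response path the abstract mentions (navigation help and structure identification on demand), which would bypass the procedure monitor and answer the multimodal command directly.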