Khodambashi Soudabeh, Nytrø Øystein
Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway.
Stud Health Technol Inform. 2017;239:48-54.
To facilitate the clinical guideline (GL) development process, several research groups have proposed computer-supported tools for authoring and publishing GLs. In a previous study we interviewed GL authors at different Norwegian institutions and identified tool shortcomings. In this follow-up study, our goal is to explore to what extent GL authoring tools have been evaluated by researchers, guideline organisations, or GL authors. This article presents the results of a systematic literature review of evaluations (including usability evaluations) of GL authoring tools. A controlled database search and backward snowballing were used to identify relevant articles. Of the 12,692 abstracts found, 188 papers were reviewed in full, and 26 papers were identified as relevant. The GRADEPro tool has attracted some evaluation; however, popular tools and platforms such as DECIDE, Doctor Evidence, JBI-SUMARI, and the G-I-N library have not been subject to specific evaluation from an authoring perspective. Overall, we found that little attention has been paid to evaluating these tools. We could not find any evaluation of how tools integrate with and support the complex GL development workflow. The results of this paper are highly relevant to GL authors, tool developers, and GL publishing organisations seeking to improve and control the GL development and maintenance process.