Evans Center for Implementation and Improvement Sciences, Boston University School of Medicine, 88 East Newton Street, Vose 216, Boston, MA, 02118, USA.
Department of Health Law, Policy & Management, Boston University School of Public Health, Boston, MA, USA.
Implement Sci. 2018 May 29;13(1):71. doi: 10.1186/s13012-018-0770-5.
The fields of implementation and improvement sciences have experienced rapid growth in recent years. However, research that seeks to inform health care change may have difficulty translating core components of implementation and improvement sciences within the traditional paradigms used to evaluate efficacy and effectiveness research. A review of implementation and improvement sciences grant proposals within an academic medical center using a traditional National Institutes of Health framework highlighted the need for tools that could assist investigators and reviewers in describing and evaluating proposed implementation and improvement sciences research.
We operationalized existing recommendations for writing implementation science proposals as the ImplemeNtation and Improvement Science Proposals Evaluation CriTeria (INSPECT) scoring system. The resulting system was applied to pilot grants submitted to a call for implementation and improvement science proposals at an academic medical center. We evaluated the reliability of the INSPECT system using Krippendorff's alpha coefficients and explored the utility of the INSPECT system to characterize common deficiencies in implementation research proposals.
We scored 30 research proposals using the INSPECT system. Proposals received a median cumulative score of 7 out of a possible 30. Across the individual elements of INSPECT, proposals scored highest on the criterion rating evidence of a care or quality gap and generally performed poorly on all other criteria. Most proposals received scores of 0 on the criteria for identifying an evidence-based practice or treatment (50%), conceptual model and theoretical justification (70%), the setting's readiness to adopt new services/treatments/programs (54%), implementation strategy/process (67%), and measurement and analysis (70%). Inter-coder reliability testing showed excellent reliability for the scoring system overall (Krippendorff's alpha coefficient 0.88), with reliability scores for individual elements ranging from 0.77 to 0.99.
The INSPECT scoring system provides a new set of scoring criteria with a high degree of inter-rater reliability and demonstrated utility for evaluating the quality of implementation and improvement sciences grant proposals.
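The abstract reports inter-coder reliability as Krippendorff's alpha but does not specify the computation used. As an illustrative sketch only (not the authors' analysis), the nominal-data form of the coefficient for two coders with no missing ratings can be computed from a coincidence matrix, where alpha = 1 - Do/De (observed vs. expected disagreement):

```python
from collections import Counter

def krippendorff_alpha_nominal(coder1, coder2):
    """Krippendorff's alpha, nominal metric, two coders, no missing data.

    Illustrative sketch: builds the coincidence matrix by counting each
    unit's value pair in both orders, then compares observed disagreement
    (Do) with the disagreement expected by chance (De).
    """
    assert len(coder1) == len(coder2) and len(coder1) > 0
    n = 2 * len(coder1)            # total number of pairable values
    o = Counter()                  # coincidence matrix o[(c, k)]
    for a, b in zip(coder1, coder2):
        o[(a, b)] += 1
        o[(b, a)] += 1
    n_c = Counter()                # marginal value frequencies
    for (c, _k), cnt in o.items():
        n_c[c] += cnt
    Do = sum(cnt for (c, k), cnt in o.items() if c != k) / n
    De = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))
    return 1 - Do / De
```

Note that INSPECT element scores are ordinal, so an ordinal or interval distance metric (weighting disagreements by how far apart the scores are) would likely be more appropriate in practice; the nominal form above is shown only because it is the simplest variant of the statistic.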