Lewis Cara C, Fischer Sarah, Weiner Bryan J, Stanick Cameo, Kim Mimi, Martinez Ruben G
Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10th St., Bloomington, IN, 47405, USA.
Department of Psychiatry and Behavioral Sciences, University of Washington, School of Medicine, Harborview Medical Center, Box 359911, 325 9th Ave, Seattle, WA, 98104, USA.
Implement Sci. 2015 Nov 4;10:155. doi: 10.1186/s13012-015-0342-x.
High-quality measurement is critical to advancing knowledge in any field. New fields, such as implementation science, are often beset with measurement gaps and poor-quality instruments, a weakness that is more easily addressed in light of systematic review findings. Although several reviews of quantitative instruments used in implementation science have been published, no studies have focused on instruments that measure implementation outcomes. Proctor and colleagues established a core set of implementation outcomes, including acceptability, adoption, appropriateness, cost, feasibility, fidelity, penetration, and sustainability (Adm Policy Ment Health Ment Health Serv Res 36:24-34, 2009). The Society for Implementation Research Collaboration (SIRC) Instrument Review Project employed an enhanced systematic review methodology (Implement Sci 2: 2015) to identify quantitative instruments of implementation outcomes relevant to mental or behavioral health settings.
Full details of the enhanced systematic review methodology are available (Implement Sci 2: 2015). To increase the feasibility of the review, and consistent with the scope of SIRC, only instruments that were applicable to mental or behavioral health were included. The review, synthesis, and evaluation included the following: (1) a search protocol for the literature review of constructs; (2) the literature review of instruments using Web of Science and PsycINFO; and (3) data extraction and instrument quality ratings to inform knowledge synthesis. Our evidence-based assessment rating criteria quantified fundamental psychometric properties as well as a crude measure of usability. Two independent raters applied the evidence-based assessment rating criteria to each instrument to generate a quality profile.
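The abstract states that two independent raters scored each instrument on six evidence-based assessment criteria to generate a quality profile, but it does not report the scoring scale or the rule for combining raters. The Python sketch below illustrates one way such a profile could be assembled and checked against a minimal-evidence threshold; the 0-4 scale, the averaging rule, and the criterion names other than responsiveness and predictive validity (which are named in the Results) are hypothetical assumptions, not details taken from the review.

```python
# Minimal sketch (not the SIRC scoring procedure): combine two independent
# raters' ratings on six evidence-based assessment criteria into a quality
# profile, then flag instruments with at least minimal evidence on all six.
# The 0-4 scale, averaging as the consensus rule, and most criterion labels
# are illustrative assumptions.
from statistics import mean

CRITERIA = [
    "reliability",            # assumed label
    "structural_validity",    # assumed label
    "predictive_validity",    # named in the Results
    "responsiveness",         # named in the Results
    "norms",                  # assumed label
    "usability",              # the abstract's "crude measure of usability"
]
MINIMAL_EVIDENCE = 1  # assumed threshold: a rating >= 1 counts as minimal evidence


def quality_profile(rater_a: dict, rater_b: dict) -> dict:
    """Average the two raters' ratings per criterion (assumed consensus rule)."""
    return {c: mean([rater_a[c], rater_b[c]]) for c in CRITERIA}


def meets_all_criteria(profile: dict) -> bool:
    """True if the instrument shows at least minimal evidence on every criterion."""
    return all(score >= MINIMAL_EVIDENCE for score in profile.values())


if __name__ == "__main__":
    rater_a = {c: 2 for c in CRITERIA}
    rater_b = {c: 1 for c in CRITERIA}
    profile = quality_profile(rater_a, rater_b)
    print(profile)
    print("Minimal evidence on all six criteria:", meets_all_criteria(profile))
```

Under these assumptions, an instrument like the single one noted in the Results would be the only entry for which `meets_all_criteria` returns True.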
We identified 104 instruments across the eight constructs: nearly half (n = 50) assessed acceptability, 19 assessed adoption, and every other implementation outcome was represented by fewer than 10 instruments. Only one instrument demonstrated at least minimal evidence of psychometric strength on all six of the evidence-based assessment criteria. The majority of instruments had no information regarding responsiveness or predictive validity.
Implementation outcomes instrumentation is underdeveloped with respect to both the sheer number of available instruments and the psychometric quality of existing instruments. Until psychometric strength is established, the field will struggle to identify which implementation strategies work best, for which organizations, and under what conditions.