Carol Bova, Carol Jaffarian, Sybil Crawford, Jose Bernardo Quintos, Mary Lee, Susan Sullivan-Bolyai
Carol Bova, PhD, RN, is Professor and Carol Jaffarian, MS, RN, is Instructor, Graduate School of Nursing, University of Massachusetts Medical School, Worcester. Sybil Crawford, PhD, is Professor, Division of Preventive and Behavioral Medicine, Department of Medicine, University of Massachusetts Medical School, Worcester. Jose Bernardo Quintos, MD, is Division Chief, Pediatric Endocrinology, Hasbro Children's Hospital, Providence, Rhode Island. Mary Lee, MD, is Professor, Department of Pediatrics, University of Massachusetts Medical School, Worcester. Susan Sullivan-Bolyai, DNSc, CNS, RN, FAAN, is Associate Professor, College of Nursing, New York University.
Nurs Res. 2017 Jan/Feb;66(1):54-59. doi: 10.1097/NNR.0000000000000194.
Measurement of intervention fidelity is an essential component of any scientifically sound intervention trial. However, few papers have proposed ways to integrate intervention fidelity data into the execution of these trials.
The purpose of this article is to describe the intervention fidelity process used in a multisite randomized controlled trial of a human patient simulator education intervention for parents of children with newly diagnosed Type 1 diabetes, and to show how these data were used to monitor drift and provide feedback that improved the consistency of both intervention and control delivery over time.
Intervention fidelity was measured for both the intervention and control conditions by direct observation, interventionist self-report of delivery, and parent participant receipt of educational information. Intervention fidelity data were analyzed after 50%, 75%, and 100% of the participants had been recruited and were compared by group (treatment and control) and research site.
The sample included 191 parents of young children newly diagnosed with Type 1 diabetes. Observation scores in both the intervention and control groups indicated a high level of intervention fidelity. Treatment receipt was also high and did not differ by treatment group. Teaching session attendance rates by site and session differed significantly at Time Point 1 (50% enrollment); following study staff retraining and reinforcement, there were no significant differences at Time Point 3 (100% enrollment).
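The abstract does not specify the statistical test used to compare attendance by site at each enrollment checkpoint. As a hedged illustration only, the sketch below shows how such a comparison might be run with a chi-square test of independence; all counts and site labels are hypothetical, not study data.

```python
# Hypothetical sketch of a fidelity-drift check: compare teaching-session
# attendance across research sites at one enrollment checkpoint.
# All numbers below are invented for illustration, not study data.
from scipy.stats import chi2_contingency

# Rows = research sites, columns = [sessions attended, sessions missed]
attendance_time_point_1 = [
    [40, 12],   # Site A (hypothetical counts)
    [35, 4],    # Site B
    [28, 15],   # Site C
]

chi2, p_value, dof, expected = chi2_contingency(attendance_time_point_1)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")

# A significant result at the 50% checkpoint would flag drift in delivery;
# repeating the test after staff retraining (e.g., at 100% enrollment)
# checks whether the site differences have resolved.
```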
Results demonstrate the importance of monitoring intervention fidelity in both the intervention and control conditions over time and of using these data to correct drift during the course of a multisite clinical trial.