Gittelman Michael A, Kincaid Madeline, Denny Sarah, Wervey Arnold Melissa, FitzGerald Michael, Carle Adam C, Mara Constance A
From the Division of Emergency Medicine (M.A.G., M.F.), Cincinnati Children's Hospital, Cincinnati, Ohio; Brown School at Washington University in St. Louis (M.K.), St. Louis, Missouri; Division of Emergency Medicine (S.D.), Nationwide Children's Hospital, Columbus, Ohio; American Academy of Pediatrics, Ohio Chapter (M.W.), Worthington, Ohio; and James M. Anderson Center for Health Systems Excellence (A.C.C., C.A.M.), Cincinnati Children's Hospital, Cincinnati, Ohio.
J Trauma Acute Care Surg. 2016 Oct;81(4 Suppl 1):S8-S13. doi: 10.1097/TA.0000000000001182.
A standardized injury prevention (IP) screening tool can identify family risks and allow pediatricians to address unsafe behaviors. To assess behavior change on later screens, the tool must be reliable for an individual respondent and, ideally, between household members. Little research has examined the reliability of safety screening tool questions. This study evaluated the test-retest reliability of parent responses on an existing IP questionnaire and compared responses between parents in the same household.
Investigators recruited parents of children 0 to 1 year of age during admission to a tertiary care children's hospital. When both parents were present, one was chosen as the "primary" respondent. After providing consent, primary respondents completed the 30-question IP screening tool and were re-screened approximately 4 hours later to test individual reliability. The "second" parent, when present, completed the tool only once. All participants received a 10-dollar gift card. Cohen's Kappa was used to estimate test-retest reliability and inter-rater agreement. Standard criteria classify Kappa values as follows: 0.0 to 0.40, poor to fair; 0.41 to 0.60, moderate; 0.61 to 0.80, substantial; and 0.81 to 1.00, almost perfect reliability.
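The statistic described above corrects observed agreement for the agreement expected by chance. As an illustration only (the data below are hypothetical, not from the study), a minimal sketch of Cohen's Kappa and the classification thresholds quoted in the abstract:

```python
from collections import Counter

def cohens_kappa(responses_a, responses_b):
    """Cohen's Kappa for two equal-length lists of categorical responses
    (e.g., a parent's first and second administrations of one question)."""
    assert len(responses_a) == len(responses_b)
    n = len(responses_a)
    # Observed agreement: proportion of paired responses that match.
    p_o = sum(a == b for a, b in zip(responses_a, responses_b)) / n
    # Chance agreement: product of each rater's marginal frequencies,
    # summed over all response categories.
    freq_a, freq_b = Counter(responses_a), Counter(responses_b)
    categories = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

def interpret(kappa):
    """Benchmarks quoted in the abstract."""
    if kappa <= 0.40:
        return "poor to fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"

# Hypothetical yes/no responses for one question, test vs. retest.
first  = ["yes", "yes", "no", "yes", "no", "no",  "yes", "no"]
second = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no"]
k = cohens_kappa(first, second)  # 7/8 observed agreement, 0.5 by chance
print(round(k, 2), interpret(k))
```

Note that Kappa can fall below zero when observed agreement is worse than chance, which is how a range extending to −0.19 (as reported for inter-rater agreement below) arises.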
One hundred five families participated, with five lost to follow-up. Thirty-two (30.5%) parent dyads completed the tool. Primary respondents were predominantly mothers (88%) and Caucasian (72%). Test-retest reliability for primary respondents was almost perfect on average: mean Kappa 0.82 (SD = 0.13; range, 0.49 to 1.00). Seventeen questions had almost perfect test-retest reliability and 11 had substantial reliability. However, inter-rater agreement between household members on 12 objective questions was poor: mean Kappa 0.35 (SD = 0.34; range, −0.19 to 1.00). One question had almost perfect inter-rater agreement and two had substantial inter-rater agreement.
When completed by a single individual, the IP screening tool had excellent test-retest reliability for nearly all questions. However, when the reporter changes between pre- and postintervention screens, observed differences may reflect poor inter-rater reliability or differing subjective experiences rather than true behavior change.