Collins Jack C, Chan Ming Yeung, Schneider Carl R, Yan Lam R, Moles Rebekah J
Sydney Pharmacy School, Faculty of Medicine and Health, The University of Sydney, Australia.
Sydney Pharmacy School, Faculty of Medicine and Health, The University of Sydney, Australia; School of Pharmacy, University of Nottingham, United Kingdom.
Res Social Adm Pharm. 2021 Jun;17(6):1198-1203. doi: 10.1016/j.sapharm.2020.09.006. Epub 2020 Sep 11.
The use of simulated patients (SPs) in pharmacy practice research has become an established method to observe practice. However, the reliability of SP-reported data compared with pharmacy staff self-reported behaviour has yet to be ascertained.
To compare the inter-rater agreement of pharmacy staff-reported and SP-reported data with researcher-reported data derived from audio recordings of SP encounters.
A dataset of 352 audio-recorded SP encounters was generated in March-October 2015 by 61 undergraduate pharmacy students completing SP visits to 36 community pharmacies in Sydney, Australia. SPs recorded post-visit scores on data collection forms, and pharmacy staff completed self-assessments on identical forms immediately after the encounter. Two hundred and seventy visits were randomly selected as the sample for this study, for which a researcher independently scored the encounters from the audio recordings. Inter-rater agreement was calculated using intra-class correlation (ICC) and weighted kappa analyses.
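As a rough illustration of the agreement statistics named above, the sketch below computes an intra-class correlation and a weighted kappa on hypothetical rater data using the pingouin and scikit-learn libraries. The column names, scores, and rating categories are invented for the example and do not reflect the study's dataset or scoring forms.

    # Minimal sketch of ICC and weighted kappa agreement analyses
    # on hypothetical long-format rater data (not the study's data).
    import pandas as pd
    import pingouin as pg
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical total scores for six encounters rated by two raters.
    encounters = list(range(1, 7))
    scores = pd.DataFrame({
        "encounter": [e for e in encounters for _ in (0, 1)],
        "rater": ["staff", "researcher"] * len(encounters),
        "total_score": [12, 15, 9, 10, 14, 14, 8, 11, 13, 13, 10, 12],
    })

    # Intra-class correlation; pingouin reports several ICC forms with 95% CIs.
    icc = pg.intraclass_corr(data=scores, targets="encounter",
                             raters="rater", ratings="total_score")
    print(icc[["Type", "ICC", "CI95%"]])

    # Weighted kappa for an ordinal rating (e.g. an overall outcome category).
    staff_rating = [2, 1, 3, 2, 1]
    researcher_rating = [2, 2, 3, 1, 1]
    kappa = cohen_kappa_score(staff_rating, researcher_rating, weights="linear")
    print(f"Weighted kappa: {kappa:.2f}")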
Analysis of staff scores returned ICC values of 0.48 (95% CI:0.38-0.56; p < 0.001) for information gathering and 0.63 (95% CI:0.55-0.70; p < 0.001) for total score. Weighted kappa for information rating was 0.30 (95% CI:0.21-0.38; p < 0.001) and 0.43 (95% CI:0.34-0.51; p < 0.001) for overall outcome. ICC values for SPs were 0.91 (95% CI:0.88-0.93; p < 0.001) and 0.90 (95% CI:0.87-0.92; p < 0.001) for information gathering and total scores respectively. Weighted kappa values were 0.44 (95% CI:0.37-0.52; p < 0.001) for information rating and 0.63 (95% CI:0.55-0.70; p < 0.001) for overall outcome.
Pharmacy staff self-reported their behaviour with a poor degree of reliability. Conversely, SPs had a high level of agreement with the researcher scoring from audio recordings. For both groups of raters, disagreement was most apparent in rating the information provided and the overall appropriateness of the outcome. Future research should investigate the discrepancy between staff-reported and actual behaviour and consider its implications for the interpretation of self-reported data.