OnkoZentrum Zürich, Zurich, Switzerland.
Clinic for Clinical Pharmacology and Toxicology, University Hospital Zurich, Zurich, Switzerland.
J Med Internet Res. 2021 Aug 5;23(8):e29271. doi: 10.2196/29271.
Electronic patient-reported outcomes (ePRO) are a relatively novel form of data with the potential to improve clinical practice for patients with cancer. In this prospective, multicenter, observational clinical trial, we sought to demonstrate the reliability of patient-reported symptoms.
The primary objective of this study was to assess the level of agreement (Cohen κ) between symptom ratings by physicians and patients via a shared review process, in order to determine the future reliability and utility of self-reported electronic symptom monitoring.
Patients receiving systemic therapy in a (neo-)adjuvant or noncurative setting captured ePRO for 52 symptoms over an observational period of 90 days. At 3-week intervals, randomly selected symptoms were jointly reviewed by the patient and physician for congruency in severity grading according to the Common Terminology Criteria for Adverse Events (CTCAE). Patient-physician agreement in the symptom review was assessed via Cohen kappa (κ), from which interrater reliability was calculated. Chi-square tests were used to determine whether patient-reported outcomes differed by symptom, cancer type, demographics, and physicians' experience.
Among the 181 patients (158 women and 23 men; median age 54.4 years), there was fair scoring agreement (κ=0.24; 95% CI 0.16-0.33) for symptoms entered 2 to 4 weeks before the intended review (first rating) and moderate agreement (κ=0.41; 95% CI 0.34-0.48) for symptoms entered within 1 week of the intended review (second rating). However, the level of agreement increased from moderate (first rating, κ=0.43) to substantial (second rating, κ=0.68) for the common symptoms of pain, fever, diarrhea, obstipation, nausea, vomiting, and stomatitis. Similar congruency levels were found for the most frequently entered symptoms (first rating: κ=0.42; second rating: κ=0.65). The symptom with the lowest agreement was hair loss (κ=-0.05). With regard to the latency of symptom entry before the review, hardly any difference was demonstrated between symptoms entered 1 to 3 days and 4 to 7 days before the intended review (κ=0.40 vs κ=0.39, respectively). In contrast, for symptoms entered 15 to 21 days before the intended review, no congruency was demonstrated (κ=-0.15). Congruency levels appeared to be unrelated to cancer type, demographics, and physicians' review experience.
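To illustrate the agreement statistic reported above, the following is a minimal sketch of Cohen κ for two raters grading the same items, such as CTCAE severity grades from a patient and a physician. The grade values and variable names are hypothetical and do not reproduce the study's actual analysis code; in practice, a library such as scikit-learn's cohen_kappa_score would typically be used.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e is the agreement expected by
    chance from each rater's marginal category frequencies.
    """
    assert rater_a and len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters graded identically
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the product of marginal frequencies per category
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if p_e == 1.0:  # degenerate case: both raters use a single category
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical CTCAE-style grades (0-4) for the same 8 symptom entries
patient   = [1, 2, 0, 3, 1, 0, 2, 1]
physician = [1, 2, 1, 3, 1, 0, 1, 1]
print(round(cohen_kappa(patient, physician), 2))  # → 0.64
```

A κ of about 0.64 would fall in the "substantial" band of the commonly used Landis-Koch interpretation (fair 0.21-0.40, moderate 0.41-0.60, substantial 0.61-0.80), the same banding reflected in the results above; note that κ can be negative when agreement falls below chance, as with the hair loss result.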
The shared monitoring and review of symptoms between patients and clinicians has the potential to improve the understanding of patient self-reporting. Our data indicate that integrating ePRO into oncological clinical research and routine clinical practice provides reliable information for self-empowerment and timely intervention for symptoms.
ClinicalTrials.gov NCT03578731; https://clinicaltrials.gov/ct2/show/NCT03578731.