Center for Observational and Real-World Evidence, Merck & Co., Inc., Rahway, NJ, USA.
Child Health Evaluative Sciences, Hospital for Sick Children, Toronto, Canada.
Med Decis Making. 2023 Aug;43(6):667-679. doi: 10.1177/0272989X231171912. Epub 2023 May 18.
Discrete choice experiments (DCEs) are increasingly conducted using online panels. However, the comparability of DCE-based preferences across modes of data collection (e.g., online vs. in-person) is not well established. In this study, a supervised, face-to-face DCE was compared with its unsupervised, online facsimile on face validity, respondent behavior, and modeled preferences.
Data from face-to-face and online EQ-5D-5L health state valuation studies that used the same experimental design and quota-sampling procedure were compared. Respondents completed 7 binary DCE tasks, each comparing 2 EQ-5D-5L health states presented side by side (health states A and B). Face validity was assessed by comparing preference patterns as a function of the severity difference between the 2 health states within a task. The prevalence of potentially suspicious choice patterns (i.e., all As, all Bs, and alternating As/Bs) was compared between studies. Preference data were modeled using multinomial logit regression and compared on each dimension's contribution to overall scale and on the importance ranking of dimension-levels.
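The suspicious-pattern screen described above (all As, all Bs, or alternating As/Bs across the 7 tasks) can be sketched as follows. This is a minimal illustration, not the study's actual code; function and variable names are invented.

```python
# Sketch: flagging potentially suspicious DCE choice patterns across a
# respondent's 7 binary tasks (all As, all Bs, or strictly alternating).
# Names are illustrative, not from the study.

def is_suspicious(choices):
    """Return True if a choice sequence is all 'A', all 'B',
    or strictly alternating (e.g., ABABABA or BABABAB)."""
    if len(set(choices)) == 1:  # all As or all Bs
        return True
    # strictly alternating: every adjacent pair differs
    return all(a != b for a, b in zip(choices, choices[1:]))

def suspicious_prevalence(respondents):
    """Share of respondents whose choice sequence is flagged."""
    flagged = sum(is_suspicious(c) for c in respondents)
    return flagged / len(respondents)
```

Comparing this prevalence between arms (as in the Online vs. F2F comparison reported below) would then be a simple two-proportion test on the flagged counts.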
A total of 1,005 Online respondents and 1,099 face-to-face (F2F) respondents were included in the main comparison of DCE tasks. Online respondents reported more problems on all EQ-5D dimensions except Mobility. The face validity of the data was similar between comparators. Online respondents had a greater prevalence of potentially suspicious DCE choice patterns (Online: 5.3% v. F2F: 2.9%; P = 0.005). When modeled, the relative contribution of each EQ-5D dimension differed between modes of administration: Online respondents weighted Mobility more heavily and Anxiety/Depression less heavily.
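One common way to express a dimension's contribution to overall scale in EQ-5D valuation work is its worst-level disutility coefficient as a share of the summed disutilities across dimensions. A minimal sketch under that assumption (the coefficient values below are invented placeholders, not the study's estimates):

```python
# Sketch: relative contribution of each EQ-5D dimension to overall scale,
# computed as the dimension's worst-level (level 5) disutility coefficient
# divided by the sum across dimensions. Values are invented placeholders.

def relative_contributions(worst_level_coefs):
    """worst_level_coefs: dict mapping dimension -> absolute disutility of
    its most severe level, taken from a fitted multinomial logit model."""
    total = sum(worst_level_coefs.values())
    return {dim: coef / total for dim, coef in worst_level_coefs.items()}

example = {  # hypothetical coefficients for illustration only
    "Mobility": 1.2,
    "Self-care": 0.8,
    "Usual activities": 0.6,
    "Pain/discomfort": 1.0,
    "Anxiety/depression": 0.4,
}
shares = relative_contributions(example)
```

Comparing these shares between the Online and F2F models is one way to operationalize the mode-of-administration differences reported above.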
Although assessments of face validity were similar between the Online and F2F modes, modeled preferences differed. Further analyses are needed to clarify whether these differences are attributable to variation in preferences or in data quality between modes of data collection.