Trippas Dries, Kellen David, Singmann Henrik, Pennycook Gordon, Koehler Derek J, Fugelsang Jonathan A, Dubé Chad
Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany.
Syracuse University, Syracuse, NY, USA.
Psychon Bull Rev. 2018 Dec;25(6):2141-2174. doi: 10.3758/s13423-018-1460-7.
The belief-bias effect is one of the most studied biases in reasoning. A recent study of the phenomenon using the signal detection theory (SDT) model called into question all theoretical accounts of belief bias by demonstrating that belief-based differences in the ability to discriminate between valid and invalid syllogisms may be an artifact stemming from the use of inappropriate linear measurement models such as analysis of variance (Dube et al., Psychological Review, 117(3), 831-863, 2010). The discrepancy between Dube et al.'s (2010) results and the previous three decades of work, together with their methodological criticisms, suggests the need to revisit earlier results, this time collecting confidence-rating responses. Using a hierarchical Bayesian meta-analysis, we reanalyzed a corpus of 22 confidence-rating studies (N = 993). The results indicated that extensive replications using confidence-rating data are unnecessary, as the observed receiver operating characteristic functions are not systematically asymmetric. These results were subsequently corroborated by a novel experimental design based on SDT's generalized area theorem. Although the meta-analysis confirmed that believability does not influence discriminability unconditionally, it also corroborated previous findings that factors such as individual differences mediate the effect. The main point is that data from previous and future studies can be safely analyzed using appropriate hierarchical methods that do not require confidence ratings. More generally, our results set a new standard for analyzing data and evaluating theories in reasoning. Important methodological and theoretical considerations for future work on belief bias and related domains are discussed.
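The measurement concern summarized above can be illustrated with a minimal sketch (not the authors' analysis; condition labels and criterion values are hypothetical). Under an equal-variance SDT model, two conditions with identical sensitivity (d') but different response criteria yield different values of the linear index H − F (hit rate minus false-alarm rate), the quantity implicitly analyzed by ANOVA on raw accuracy, even though SDT recovers the same d' in both cases:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal distribution

def rates(d_prime, c):
    """Hit and false-alarm rates under equal-variance SDT
    with sensitivity d_prime and decision criterion c."""
    hit = nd.cdf(d_prime / 2 - c)
    fa = nd.cdf(-d_prime / 2 - c)
    return hit, fa

# Two hypothetical belief conditions with IDENTICAL sensitivity (d' = 1)
# but different response bias (criterion placement).
for label, c in [("believable", 0.0), ("unbelievable", 1.0)]:
    h, f = rates(1.0, c)
    h_minus_f = h - f  # linear accuracy index (what ANOVA operates on)
    d_recovered = nd.inv_cdf(h) - nd.inv_cdf(f)  # SDT sensitivity estimate
    print(f"{label}: H - F = {h_minus_f:.3f}, d' = {d_recovered:.3f}")
```

Because the equal-variance ROC is curved rather than linear, the criterion shift alone changes H − F while d' stays constant, which is the sense in which a linear measurement model can manufacture an apparent discriminability difference.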