Vincenzo Crupi, Jonathan D. Nelson, Björn Meder, Gustavo Cevolani, Katya Tentori
Center for Logic, Language, and Cognition, Department of Philosophy and Education, University of Turin.
School of Psychology, University of Surrey.
Cogn Sci. 2018 Jun 17. doi: 10.1111/cogs.12613.
Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people's goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the reduction thereof. However, a variety of alternative entropy metrics (Hartley, Quadratic, Tsallis, Rényi, and more) are popular in the social and the natural sciences, computer science, and philosophy of science. Particular entropy measures have been predominant in particular research areas, and it is often an open issue whether these divergences emerge from different theoretical and practical goals or are merely due to historical accident. Cutting across disciplinary boundaries, we show that several entropy and entropy reduction measures arise as special cases in a unified formalism, the Sharma-Mittal framework. Using mathematical results, computer simulations, and analyses of published behavioral data, we discuss four key questions: How do various entropy models relate to each other? What insights can be obtained by considering diverse entropy models within a unified framework? What is the psychological plausibility of different entropy models? What new questions and insights for research on human information acquisition follow? Our work provides several new pathways for theoretical and empirical research, reconciling apparently conflicting approaches and empirical findings within a comprehensive and unified information-theoretic formalism.
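To make the unification concrete, here is a minimal sketch of the two-parameter Sharma-Mittal entropy and how the measures named above fall out as special cases. The parameter names `r` (order) and `t` (degree) and the limit-handling logic are illustrative assumptions, not the authors' implementation; natural logarithms are used throughout.

```python
import math

def sharma_mittal(p, r, t, eps=1e-9):
    """Sharma-Mittal entropy of a probability distribution p.

    Parameters r (order) and t (degree) are assumed names; the singular
    limits r -> 1 and t -> 1 are replaced by their closed forms.
    """
    p = [x for x in p if x > 0]  # 0 * log 0 is taken as 0
    shannon = -sum(x * math.log(x) for x in p)
    if abs(r - 1) < eps and abs(t - 1) < eps:
        return shannon                              # Shannon entropy
    if abs(r - 1) < eps:
        # degree-t limit of order 1 (a "Gaussian"-type entropy)
        return (math.exp((1 - t) * shannon) - 1) / (1 - t)
    s = sum(x ** r for x in p)                      # sum of p_i^r
    if abs(t - 1) < eps:
        return math.log(s) / (1 - r)                # Renyi entropy of order r
    return (s ** ((1 - t) / (1 - r)) - 1) / (1 - t)

# Special cases on a uniform distribution over 4 states:
u = [0.25] * 4
shannon = sharma_mittal(u, 1, 1)    # Shannon: ln 4
hartley = sharma_mittal(u, 0, 1)    # Hartley: ln(support size) = ln 4
quad    = sharma_mittal(u, 2, 2)    # Quadratic (Tsallis, r = t = 2): 1 - sum p_i^2
renyi2  = sharma_mittal(u, 2, 1)    # Renyi of order 2
```

Setting t = r recovers the Tsallis family in general, so a single pair of parameters spans all the entropy models the abstract mentions; uncertainty reduction from a diagnostic test can then be modeled as the expected drop in `sharma_mittal` for any chosen (r, t).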