On the logic of hypothesis testing in functional imaging.

Author information

Turkheimer Federico E, Aston John A D, Cunningham Vincent J

Affiliations

Department of Neuropathology, Imperial College London, Charing Cross Hospital, St. Dunstan's Road, London, W6 8RP, UK.

Publication information

Eur J Nucl Med Mol Imaging. 2004 May;31(5):725-32. doi: 10.1007/s00259-003-1387-7. Epub 2004 Jan 17.

Abstract

Statistics is nowadays the customary language of functional imaging. It is common to express an experimental setting as a set of null hypotheses over complex models and to present results as maps of p-values derived from sophisticated probability distributions. However, the growing interest in the development of advanced statistical algorithms is not always paralleled by similar attention to how these techniques may regiment the ways in which users draw inferences from their data. This article investigates the logical bases of current statistical approaches in functional imaging and probes their suitability to inductive inference in neuroscience. The frequentist approach to statistical inference is reviewed with attention to its two main constituents: Fisherian "significance testing" and Neyman-Pearson "hypothesis testing". It is shown that these conceptual systems, which are similar in the univariate testing case, dissociate into two quite different methods of inference when applied to the multiple testing problem, the typical framework of functional imaging. This difference is explained with reference to specific issues, like small volume correction, which are most likely to generate confusion in the practitioner. Further insight into this problem is achieved by recasting the multiple comparison problem into a multivariate Bayesian formulation. This formulation introduces a new perspective where the inferential process is more clearly defined in two distinct steps. The first one, inductive in form, uses exploratory techniques to acquire preliminary notions on the spatial patterns and the signal and noise characteristics. The (smaller) set of likely spatial patterns generated is then tested with newer data and a more rigorous multiple hypothesis testing technique (deductive step).
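The dissociation the abstract describes between Fisherian per-test significance and Neyman-Pearson family-wise control becomes concrete in the multiple-testing setting of imaging, where thousands of voxels are tested at once. The sketch below (an illustration, not taken from the paper; voxel counts and thresholds are arbitrary) simulates pure-noise z-scores for many voxels and compares an uncorrected per-voxel threshold with a Bonferroni family-wise correction, one simple instance of the Neyman-Pearson approach to controlling error over the whole family of tests.

```python
import math
import random

def one_sided_p(z):
    # p-value for a one-sided z-test: P(Z >= z) under the null
    return 0.5 * math.erfc(z / math.sqrt(2))

random.seed(0)
n_voxels = 10_000  # arbitrary; real images have far more
alpha = 0.05

# Null data: z-scores for voxels carrying no true signal
z_scores = [random.gauss(0, 1) for _ in range(n_voxels)]
p_values = [one_sided_p(z) for z in z_scores]

# Fisher-style per-test thresholding: each voxel is judged on its
# own p-value, so false positives accumulate across the image.
uncorrected_hits = sum(p < alpha for p in p_values)

# Neyman-Pearson family-wise control via Bonferroni: the threshold
# shrinks with the number of tests, bounding the probability of
# ANY false positive in the whole family at alpha.
bonferroni_hits = sum(p < alpha / n_voxels for p in p_values)

print(uncorrected_hits)  # roughly alpha * n_voxels under the null
print(bonferroni_hits)   # rarely more than zero
```

Under the null, the uncorrected count hovers near 500 false positives while the Bonferroni-corrected count is almost always zero, which is why the choice of inferential framework, not just the test statistic, drives what an imaging map appears to show.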

