Neuroimaging Research: From Null-Hypothesis Falsification to Out-of-Sample Generalization

Cited by: 6
Authors
Bzdok, Danilo [1 ,2 ,3 ]
Varoquaux, Gael [3 ]
Thirion, Bertrand [3 ]
Affiliations
[1] Rhein Westfal TH Aachen, Dept Psychiat Psychotherapy & Psychosomat, Pauwelsstr 30, D-52074 Aachen, Germany
[2] JARA, Translat Brain Med, Aachen, Germany
[3] INRIA, Neurospin, Paris, France
Keywords
neuroscience; statistical inference; epistemology; hypothesis testing; cross-validation
DOI
10.1177/0013164416667982
Chinese Library Classification (CLC)
G44 [Educational Psychology]
Subject Classification Codes
0402; 040202
Abstract
Brain-imaging technology has boosted the quantification of neurobiological phenomena underlying human mental operations and their disturbances. Since its inception, drawing inference on neurophysiological effects has hinged on classical statistical methods, especially the general linear model. The tens of thousands of variables per brain scan were routinely tackled by independent statistical tests on each voxel. This circumvented the curse of dimensionality in exchange for neurobiologically imperfect observation units, a challenging multiple-comparisons problem, and limited scaling to currently growing data repositories. Yet the ever-finer information granularity of neuroimaging data repositories has spurred a rapidly increasing adoption of statistical learning algorithms. These scale naturally to high-dimensional data, extract models from data rather than prespecifying them, and are empirically evaluated for extrapolation to unseen data. The present article portrays commonalities and differences between long-standing classical inference and emerging generalization inference relevant for conducting neuroimaging research.
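To make the abstract's contrast concrete, the following is a minimal Python sketch of the two inference regimes on synthetic data: mass-univariate null-hypothesis testing per voxel versus a cross-validated predictive model evaluated on unseen data. The array shapes, the planted effect, and the SciPy/scikit-learn parameter choices are illustrative assumptions, not the analysis reported in the article.

    import numpy as np
    from scipy import stats
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_scans, n_voxels = 100, 5000          # rows: brain scans, columns: voxels
    X = rng.standard_normal((n_scans, n_voxels))
    y = rng.integers(0, 2, n_scans)        # e.g., task vs. rest labels
    X[y == 1, :10] += 0.5                  # plant a weak effect in 10 voxels

    # Classical inference: an independent two-sample t-test at each voxel,
    # then Bonferroni correction for the multiple-comparisons problem.
    t, p = stats.ttest_ind(X[y == 1], X[y == 0], axis=0)
    significant = p < (0.05 / n_voxels)
    print(f"voxels surviving Bonferroni: {significant.sum()}")

    # Generalization inference: fit a regularized model on all voxels at
    # once and evaluate extrapolation to unseen data by cross-validation.
    clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"out-of-sample accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

The univariate branch tests each voxel in isolation and pays for it in corrected thresholds, while the learning branch uses all voxels jointly and is judged solely by held-out prediction accuracy.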
Pages: 868-880
Number of pages: 13