Common component classification: What can we learn from machine learning?

Cited by: 2
Authors
Anderson, Ariana [1 ,2 ]
Labus, Jennifer S. [2 ,3 ,4 ]
Vianna, Eduardo P. [2 ,3 ,4 ]
Mayer, Emeran A. [2 ,3 ,4 ]
Cohen, Mark S. [1 ,2 ]
Affiliations
[1] Univ Calif Los Angeles, Ctr Cognit Neurosci, David Geffen Sch Med, Los Angeles, CA 90095 USA
[2] Univ Calif Los Angeles, David Geffen Sch Med, Dept Psychiat & Behav Sci, Los Angeles, CA 90095 USA
[3] Univ Calif Los Angeles, David Geffen Sch Med, Ctr Neurobiol Stress, Los Angeles, CA 90095 USA
[4] Univ Calif Los Angeles, David Geffen Sch Med, Brain Res Inst, Los Angeles, CA 90095 USA
Funding
US National Institutes of Health;
Keywords
Classification; Discrimination; fMRI; Bias; Machine learning; Independent components analysis; Cross-validation; Irritable bowel; FUNCTIONAL MRI; FMRI;
DOI
10.1016/j.neuroimage.2010.05.065
CLC number
Q189 [Neuroscience];
Discipline code
071006;
Abstract
Machine learning methods have been applied to classifying fMRI scans by studying locations in the brain that exhibit temporal intensity variation between groups, frequently reporting classification accuracy of 90% or better. Although empirical results are quite favorable, one might doubt the ability of classification methods to withstand changes in task ordering and the reproducibility of activation patterns over runs, and question how much of the classifiers' power is due to artifactual noise versus genuine neurological signal. To examine the true strength and power of machine learning classifiers, we create and then deconstruct a classifier to examine its sensitivity to physiological noise, task reordering, and across-scan classification ability. The models are trained and tested both within and across runs to assess stability and reproducibility across conditions. We demonstrate the use of independent components analysis for both feature extraction and artifact removal and show that removal of such artifacts can reduce predictive accuracy even when the data have been cleaned in the preprocessing stages. We demonstrate how mistakes in the feature selection process can cause the cross-validation error reported in publications to be a biased estimate of the testing error seen in practice, and we measure this bias by purposefully constructing flawed models. We discuss other ways to introduce bias and the statistical assumptions underlying the data and the models themselves. Finally, we discuss the complications in drawing inference from the smaller sample sizes typically seen in fMRI studies, the effects of small or unbalanced samples on the Type I and Type II error rates, and how publication bias can give false confidence in the power of such methods. Collectively this work identifies challenges specific to fMRI classification and methods affecting the stability of models. (C) 2010 Elsevier Inc. All rights reserved.
Pages: 517-524
Page count: 8
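The feature-selection pitfall described in the abstract can be illustrated with a minimal sketch. This is not the authors' code (the paper works with fMRI scans and ICA-derived features); the synthetic data sizes, the scikit-learn estimators, and the k=50 selection threshold below are illustrative assumptions. On pure noise, selecting "discriminative" features on the full sample before cross-validation yields the kind of optimistic accuracy estimate the abstract warns about, while refitting the selection inside each training fold keeps the estimate near chance.

```python
# Minimal sketch (assumed setup, not the paper's pipeline) of biased vs. unbiased
# cross-validation when feature selection leaks information from the test folds.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_scans, n_voxels = 40, 5000                    # small-sample, high-dimensional, fMRI-like
X = rng.standard_normal((n_scans, n_voxels))    # pure noise: no true group difference
y = np.repeat([0, 1], n_scans // 2)             # two balanced "groups"

# Biased protocol: choose the 50 most discriminative features using ALL scans,
# then cross-validate only the classifier on those pre-selected features.
X_sel = SelectKBest(f_classif, k=50).fit_transform(X, y)
biased = cross_val_score(LinearSVC(), X_sel, y, cv=5).mean()

# Unbiased protocol: nest the feature selector inside the pipeline so it is
# refit on each training fold and never sees the held-out scans.
pipe = make_pipeline(SelectKBest(f_classif, k=50), LinearSVC())
unbiased = cross_val_score(pipe, X, y, cv=5).mean()

print(f"biased CV accuracy:   {biased:.2f}")    # typically well above chance
print(f"unbiased CV accuracy: {unbiased:.2f}")  # hovers near 0.5 (chance)
```

Because the biased protocol lets every scan, including future test scans, influence which voxels are kept, its cross-validated accuracy overstates the accuracy that would be observed on genuinely new data, which is the bias the paper measures by deliberately building flawed models.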