A new look at state-space models for neural data

Cited by: 133
Authors
Paninski, Liam [1 ,2 ]
Ahmadian, Yashar [1 ,2 ]
Ferreira, Daniel Gil [1 ,2 ]
Koyama, Shinsuke [3 ]
Rad, Kamiar Rahnama [1 ,2 ]
Vidne, Michael [1 ,2 ]
Vogelstein, Joshua [4 ]
Wu, Wei [5 ]
Affiliations
[1] Columbia Univ, Dept Stat, New York, NY 10027 USA
[2] Columbia Univ, Ctr Theoret Neurosci, New York, NY USA
[3] Carnegie Mellon Univ, Dept Stat, Pittsburgh, PA 15213 USA
[4] Johns Hopkins Univ, Dept Neurosci, Baltimore, MD USA
[5] Florida State Univ, Dept Stat, Tallahassee, FL 32306 USA
Keywords
Neural coding; State-space models; Hidden Markov model; Tridiagonal matrix; RECEPTIVE-FIELD PLASTICITY; HIDDEN MARKOV-MODELS; MAXIMUM-LIKELIHOOD; FIRING PATTERNS; DYNAMIC-ANALYSIS; POPULATION; INHIBITION; RECORDINGS; FRAMEWORK; NETWORK;
DOI
10.1007/s10827-009-0179-x
Chinese Library Classification (CLC)
Q [Biological Sciences];
Subject classification codes
07; 0710; 09
Abstract
State-space methods have proven indispensable in neural data analysis. However, common methods for performing inference in state-space models with non-Gaussian observations rely on certain approximations which are not always accurate. Here we review direct optimization methods that avoid these approximations, but that nonetheless retain the computational efficiency of the approximate methods. We discuss a variety of examples, applying these direct optimization techniques to problems in spike train smoothing, stimulus decoding, parameter estimation, and inference of synaptic properties. Along the way, we point out connections to some related standard statistical methods, including spline smoothing and isotonic regression. Finally, we note that the computational methods reviewed here do not in fact depend on the state-space setting at all; instead, the key property we are exploiting involves the bandedness of certain matrices. We close by discussing some applications of this more general point of view, including Markov chain Monte Carlo methods for neural decoding and efficient estimation of spatially-varying firing rates.
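The "bandedness" point in the abstract is the heart of the computational argument: for a scalar state-space model, the Hessian of the log-posterior over the full state path is tridiagonal, so the MAP path can be found by Newton's method at O(T) cost per iteration using a banded solver. The following is a minimal sketch of that idea, not the authors' code: it assumes an AR(1) Gaussian prior and Poisson spike-count observations with an exponential link, and the function name map_state_path and all parameter values are illustrative choices.

import numpy as np
from scipy.linalg import solveh_banded


def map_state_path(y, a=0.95, sigma=0.5, n_iter=50, tol=1e-8):
    """MAP estimate of the latent path x given spike counts y.

    Assumed model (illustrative, not from the paper):
      prior:        x_t = a * x_{t-1} + N(0, sigma^2),  x_1 ~ N(0, sigma^2)
      observations: y_t ~ Poisson(exp(x_t))
    """
    T = len(y)

    # Tridiagonal prior precision J, stored in the (2, T) "upper" banded
    # layout expected by scipy.linalg.solveh_banded.
    J = np.zeros((2, T))
    J[1, :] = (1.0 + a**2) / sigma**2      # main diagonal
    J[1, -1] = 1.0 / sigma**2              # last state has no successor
    J[0, 1:] = -a / sigma**2               # superdiagonal

    x = np.zeros(T)
    for _ in range(n_iter):
        rate = np.exp(x)

        # Gradient of the log-posterior: observation term minus prior term J x.
        Jx = J[1, :] * x
        Jx[:-1] += J[0, 1:] * x[1:]
        Jx[1:] += J[0, 1:] * x[:-1]
        grad = (y - rate) - Jx

        # Negative Hessian = J + diag(rate): still tridiagonal and positive definite.
        H = J.copy()
        H[1, :] += rate

        # Newton step via an O(T) banded Cholesky solve; a line search could be
        # added for robustness, omitted here to keep the sketch short.
        step = solveh_banded(H, grad)
        x = x + step
        if np.max(np.abs(step)) < tol:
            break
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, a, sigma = 500, 0.95, 0.5
    x_true = np.zeros(T)
    for t in range(1, T):
        x_true[t] = a * x_true[t - 1] + sigma * rng.standard_normal()
    y = rng.poisson(np.exp(x_true))
    x_map = map_state_path(y, a=a, sigma=sigma)
    print("RMS error of smoothed path:", np.sqrt(np.mean((x_map - x_true) ** 2)))

The same structure carries over to the paper's other applications: as long as the log-prior couples only neighboring time points, the Hessian stays banded and the per-iteration cost stays linear in T.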
Pages: 107-126
Page count: 20
Related papers
50 in total
  • [21] Inequality Constrained State-Space Models
    Qian, Hang
    JOURNAL OF BUSINESS & ECONOMIC STATISTICS, 2019, 37 (02) : 350 - 362
  • [22] DISTURBANCE SMOOTHER FOR STATE-SPACE MODELS
    KOOPMAN, SJ
    BIOMETRIKA, 1993, 80 (01) : 117 - 126
  • [23] Probabilistic Recurrent State-Space Models
    Doerr, Andreas
    Daniel, Christian
    Schiegg, Martin
    Nguyen-Tuong, Duy
    Schaal, Stefan
    Toussaint, Marc
    Trimpe, Sebastian
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [24] State-space estimation with uncertain models
    Sayed, AH
    Subramanian, A
    TOTAL LEAST SQUARES AND ERRORS-IN-VARIABLES MODELING: ANALYSIS, ALGORITHMS AND APPLICATIONS, 2002, : 191 - 202
  • [25] Identification of structured state-space models
    Yu, Chengpu
    Ljung, Lennart
    Verhaegen, Michel
    AUTOMATICA, 2018, 90 : 54 - 61
  • [26] Approximate Methods for State-Space Models
    Koyama, Shinsuke
    Perez-Bolde, Lucia Castellanos
    Shalizi, Cosma Rohilla
    Kass, Robert E.
    JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2010, 105 (489) : 170 - 180
  • [27] State-space models for control and identification
    Raynaud, HF
    Kulcsár, C
    Hammi, R
    ADVANCES IN COMMUNICATION CONTROL NETWORKS, 2005, 308 : 177 - 197
  • [28] Smoothing algorithms for state-space models
    Briers, Mark
    Doucet, Arnaud
    Maskell, Simon
    ANNALS OF THE INSTITUTE OF STATISTICAL MATHEMATICS, 2010, 62 (01) : 61 - 89
  • [29] Robust quasi-LPV control based on neural state-space models
    Bendtsen, JD
    Trangbæk, K
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2002, 13 (02): : 355 - 368
  • [30] Modulation Depth Estimation and Variable Selection in State-Space Models for Neural Interfaces
    Malik, Wasim Q.
    Hochberg, Leigh R.
    Donoghue, John P.
    Brown, Emery N.
    IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, 2015, 62 (02) : 570 - 581