Neural circuits for learning context-dependent associations of stimuli

Cited by: 6
Authors
Zhu, Henghui [1 ]
Paschalidis, Ioannis Ch [2 ,3 ]
Hasselmo, Michael E. [4 ]
Affiliations
[1] Boston Univ, Div Syst Engn, 15 St Marys St, Brookline, MA 02446 USA
[2] Boston Univ, Div Syst Engn, Dept Elect & Comp Engn, 8 St Marys St, Boston, MA 02215 USA
[3] Boston Univ, Dept Biomed Engn, 8 St Marys St, Boston, MA 02215 USA
[4] Boston Univ, Ctr Syst Neurosci, Kilachand Ctr Integrated Life Sci & Engn, 610 Commonwealth Ave, Boston, MA 02215 USA
Funding
National Science Foundation (USA);
Keywords
Neural circuit model; Neural networks; Reinforcement learning; ACTOR-CRITIC ALGORITHM; PREFRONTAL CORTEX; COMPUTATIONAL MODEL; WORKING-MEMORY; MECHANISMS; NETWORK; APPROXIMATION;
DOI
10.1016/j.neunet.2018.07.018
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The use of reinforcement learning combined with neural networks provides a powerful framework for solving certain tasks in engineering and cognitive science. Previous research shows that neural networks have the power to automatically extract features and learn hierarchical decision rules. In this work, we investigate reinforcement learning methods for performing a context-dependent association task using two kinds of neural network models (using continuous firing rate neurons), as well as a neural circuit gating model. The task allows examination of the ability of different models to extract hierarchical decision rules and generalize beyond the examples presented to the models in the training phase. We find that the simple neural circuit gating model, trained using response-based regulation of Hebbian associations, performs almost at the same level as a reinforcement learning algorithm combined with neural networks trained with more sophisticated back-propagation of error methods. A potential explanation is that hierarchical reasoning is the key to performance and the specific learning method is less important. (C) 2018 Elsevier Ltd. All rights reserved.
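The abstract contrasts backprop-trained reinforcement learning networks with a simpler circuit trained by response-based regulation of Hebbian associations. As a rough illustration of the latter idea (not the paper's actual model), the sketch below implements a reward-modulated, three-factor Hebbian update on a single layer of continuous firing-rate neurons; all names, sizes, and the exploration-noise scheme are assumptions chosen for a minimal runnable example.

```python
import numpy as np

# Hypothetical sketch of reward-modulated Hebbian learning on a single
# layer of continuous firing-rate neurons. Not the paper's model; just a
# minimal three-factor rule: pre- x post-synaptic activity gated by a
# scalar reward derived from the response.

rng = np.random.default_rng(0)
n_in, n_out = 4, 2
W = rng.normal(scale=0.1, size=(n_out, n_in))    # synaptic weights
eta = 0.05                                       # learning rate

def trial(W, x, target):
    noise = rng.normal(scale=0.5, size=n_out)    # exploratory perturbation
    y = np.tanh(W @ x + noise)                   # continuous firing rates
    reward = 1.0 if int(np.argmax(y)) == target else -1.0
    # Hebbian outer product (post x pre), gated by response-based reward:
    # correct responses are strengthened, incorrect ones weakened.
    W = W + eta * reward * np.outer(y, x)
    return W, reward

x = np.array([1.0, 0.0, 1.0, 0.0])               # example stimulus pattern
for _ in range(200):
    W, reward = trial(W, x, target=0)
```

Unlike back-propagation of error, no gradient is passed through the network; the only feedback is a scalar reward, which is what makes such rules attractive as biologically plausible learning mechanisms.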
Pages: 48-60
Page count: 13
Related papers
50 items total
  • [31] Context-Dependent "Upper Anchors" for Learning Progressions
    Sikorski, Tiffany-Rose
    SCIENCE & EDUCATION, 2019, 28 (08) : 957 - 981
  • [32] ON REDUCING LEARNING TIME IN CONTEXT-DEPENDENT MAPPINGS
    YEUNG, DY
    BEKEY, GA
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 1993, 4 (01): : 31 - 42
  • [33] A Probabilistic Successor Representation for Context-Dependent Learning
    Geerts, Jesse P.
    Gershman, Samuel J.
    Burgess, Neil
    Stachenfeld, Kimberly L.
    PSYCHOLOGICAL REVIEW, 2024, 131 (02) : 578 - 597
  • [34] The Effect of Practice Schedule on Context-Dependent Learning
    Lee, Ya-Yun
    Fisher, Beth E.
    JOURNAL OF MOTOR BEHAVIOR, 2019, 51 (02) : 121 - 128
  • [36] CONTEXT-FREE AND CONTEXT-DEPENDENT VOCABULARY LEARNING - AN EXPERIMENT
    PICKERING, M
    SYSTEM, 1982, 10 (01) : 79 - 83
  • [37] Multiple extinction contexts modulate the neural correlates of context-dependent extinction learning and retrieval
    Hermann, Andrea
    Stark, Rudolf
    Muller, Eva A.
    Kruse, Onno
    Wolf, Oliver T.
    Merz, Christian J.
    NEUROBIOLOGY OF LEARNING AND MEMORY, 2020, 168
  • [38] Global context-dependent recurrent neural network language model with sparse feature learning
    Deng, Hongli
    Zhang, Lei
    Wang, Lituan
    NEURAL COMPUTING & APPLICATIONS, 2019, 31 (Suppl 2): : 999 - 1011
  • [39] Low-Energy and Fast Spiking Neural Network For Context-Dependent Learning on FPGA
    Asgari, Hajar
    Maybodi, Babak Mazloom-Nezhad
    Payvand, Melika
    Azghadi, Mostafa Rahimi
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2020, 67 (11) : 2697 - 2701