Control of neural systems at multiple scales using model-free, deep reinforcement learning

Cited: 13
Authors
Mitchell, B. A. [1 ]
Petzold, L. R. [1 ,2 ]
Affiliations
[1] Univ Calif Santa Barbara, Dept Comp Sci, Santa Barbara, CA 93106 USA
[2] Univ Calif Santa Barbara, Dept Mech Engn, Santa Barbara, CA 93106 USA
Source
SCIENTIFIC REPORTS | 2018, Vol. 8
Keywords
DYNAMICS;
DOI
10.1038/s41598-018-29134-x
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject classification codes
07; 0710; 09;
Abstract
Recent improvements in hardware and data collection have lowered the barrier to practical neural control. Most current contributions to the field have focused on model-based control; however, models of neural systems are complex and difficult to design. To circumvent these issues, we adapt a model-free method from the reinforcement learning literature, Deep Deterministic Policy Gradients (DDPG). Model-free reinforcement learning presents an attractive framework because of the flexibility it offers, allowing the user to avoid modeling system dynamics. We make use of this feature by applying DDPG to models of low-level and high-level neural dynamics. We show that, despite being model-free, DDPG is able to solve more difficult problems than current methods can. These problems include the induction of global synchrony by entrainment of weakly coupled oscillators and the control of trajectories through a latent phase space of an underactuated network of neurons. While this work was performed on simulated systems, it suggests that advances in modern reinforcement learning may enable the solution of fundamental problems in neural control and movement towards more complex objectives in real systems.
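The synchrony task described in the abstract (entraining weakly coupled oscillators toward global synchrony) can be framed as a continuous-control reinforcement learning problem of the kind DDPG targets. The sketch below is illustrative only: it assumes a generic Kuramoto-style phase-oscillator environment rather than the authors' actual simulation, and the class and parameter names (KuramotoEnv, coupling, etc.) are hypothetical. The Kuramoto order parameter serves as the reward that a trained DDPG actor would maximize; here a zero placeholder action stands in for the learned policy.

```python
import numpy as np

class KuramotoEnv:
    """Toy environment: N weakly coupled phase oscillators with an additive
    per-oscillator control input. Reward is the Kuramoto order parameter
    (a measure of global synchrony). Illustrative sketch only; not the
    authors' simulation."""

    def __init__(self, n=10, coupling=0.5, dt=0.01, seed=0):
        self.n = n
        self.k = coupling
        self.dt = dt
        self.rng = np.random.default_rng(seed)
        self.omega = self.rng.normal(1.0, 0.1, n)  # natural frequencies
        self.theta = None

    def reset(self):
        self.theta = self.rng.uniform(0.0, 2.0 * np.pi, self.n)
        return np.concatenate([np.cos(self.theta), np.sin(self.theta)])

    def step(self, action):
        # Kuramoto dynamics: dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i) + u_i
        coupling = self.k / self.n * np.sum(
            np.sin(self.theta[None, :] - self.theta[:, None]), axis=1)
        self.theta += self.dt * (self.omega + coupling + action)
        obs = np.concatenate([np.cos(self.theta), np.sin(self.theta)])
        # Order parameter r in [0, 1]; r = 1 means perfect synchrony.
        r = np.abs(np.mean(np.exp(1j * self.theta)))
        return obs, r

# Rollout with a zero-action placeholder policy; a DDPG actor network
# would map obs -> action to drive r toward 1.
env = KuramotoEnv()
obs = env.reset()
for t in range(1000):
    action = np.zeros(env.n)
    obs, reward = env.step(action)
print(f"final order parameter: {reward:.3f}")
```

Representing each phase by its cosine and sine keeps the observation continuous across the 2π wrap-around, a common choice when feeding phase variables to a neural-network policy.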
Pages: 12
Related Papers
50 records in total
  • [1] Control of neural systems at multiple scales using model-free, deep reinforcement learning
    B. A. Mitchell
    L. R. Petzold
    Scientific Reports, 8
  • [2] Control of a Wave Energy Converter Using Model-free Deep Reinforcement Learning
    Chen, Kemeng
    Huang, Xuanrui
    Lin, Zechuan
    Xiao, Xi
    2024 UKACC 14TH INTERNATIONAL CONFERENCE ON CONTROL (CONTROL), 2024: 1 - 6
  • [3] Model-Free Control for Distributed Stream Data Processing using Deep Reinforcement Learning
    Li, Teng
    Xu, Zhiyuan
    Tang, Jian
    Wang, Yanzhi
    PROCEEDINGS OF THE VLDB ENDOWMENT, 2018, 11 (06): 705 - 718
  • [4] Model-free learning control of neutralization processes using reinforcement learning
    Syafiie, S.
    Tadeo, F.
    Martinez, E.
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2007, 20 (06) : 767 - 782
  • [5] Model-Free Load Frequency Control of Nonlinear Power Systems Based on Deep Reinforcement Learning
    Chen, Xiaodi
    Zhang, Meng
    Wu, Zhengguang
    Wu, Ligang
    Guan, Xiaohong
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (04) : 6825 - 6833
  • [6] Linear Quadratic Control Using Model-Free Reinforcement Learning
    Yaghmaie, Farnaz Adib
    Gustafsson, Fredrik
    Ljung, Lennart
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2023, 68 (02) : 737 - 752
  • [7] Model-Free Quantum Control with Reinforcement Learning
    Sivak, V. V.
    Eickbusch, A.
    Liu, H.
    Royer, B.
    Tsioutsios, I.
    Devoret, M. H.
    PHYSICAL REVIEW X, 2022, 12 (01)
  • [8] A model-free deep reinforcement learning approach for control of exoskeleton gait patterns
    Rose, Lowell
    Bazzocchi, Michael C. F.
    Nejat, Goldie
    ROBOTICA, 2022, 40 (07) : 2189 - 2214
  • [9] Model-free self-triggered control based on deep reinforcement learning for unknown nonlinear systems
    Wan, Haiying
    Karimi, Hamid Reza
    Luan, Xiaoli
    Liu, Fei
    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, 2023, 33 (03) : 2238 - 2250
  • [10] Model-free Predictive Optimal Iterative Learning Control using Reinforcement Learning
    Zhang, Yueqing
    Chu, Bing
    Shu, Zhan
    2022 AMERICAN CONTROL CONFERENCE, ACC, 2022, : 3279 - 3284