Control of neural systems at multiple scales using model-free, deep reinforcement learning

Cited by: 13
Authors
Mitchell, B. A. [1 ]
Petzold, L. R. [1 ,2 ]
Affiliations
[1] Univ Calif Santa Barbara, Dept Comp Sci, Santa Barbara, CA 93106 USA
[2] Univ Calif Santa Barbara, Dept Mech Engn, Santa Barbara, CA 93106 USA
Source
SCIENTIFIC REPORTS | 2018, Vol. 8
Keywords
DYNAMICS;
DOI
10.1038/s41598-018-29134-x
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
Recent improvements in hardware and data collection have lowered the barrier to practical neural control. Most current contributions to the field have focused on model-based control; however, models of neural systems are quite complex and difficult to design. To circumvent these issues, we adapt a model-free method from the reinforcement learning literature, Deep Deterministic Policy Gradients (DDPG). Model-free reinforcement learning presents an attractive framework because of the flexibility it offers, allowing the user to avoid modeling system dynamics. We make use of this feature by applying DDPG to models of low-level and high-level neural dynamics. We show that, despite being model-free, DDPG is able to solve more difficult problems than current methods can. These problems include the induction of global synchrony by entrainment of weakly coupled oscillators and the control of trajectories through a latent phase space of an underactuated network of neurons. While this work has been performed on simulated systems, it suggests that advances in modern reinforcement learning may enable the solution of fundamental problems in neural control and movement towards more complex objectives in real systems.
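For readers unfamiliar with the method named in the abstract, the sketch below illustrates the core DDPG update: a deterministic actor, a Q-critic, and slowly tracking target networks with soft (Polyak) updates. It is a minimal illustration in PyTorch under assumed network sizes, hyperparameters, and a generic replay-buffer minibatch interface; it is not the authors' implementation and omits exploration noise and the simulated neural systems studied in the paper.

# Minimal DDPG update sketch (PyTorch). Hypothetical dimensions and
# hyperparameters for illustration only; not the authors' code.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2   # assumed sizes for a small continuous-control task

def mlp(in_dim, out_dim, out_act=None):
    # Two hidden layers of 64 units; optional output activation (tanh for the actor).
    layers = [nn.Linear(in_dim, 64), nn.ReLU(),
              nn.Linear(64, 64), nn.ReLU(),
              nn.Linear(64, out_dim)]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

# Deterministic actor mu(s), critic Q(s, a), and their target copies.
actor = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())
critic = mlp(STATE_DIM + ACTION_DIM, 1)
actor_targ = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())
critic_targ = mlp(STATE_DIM + ACTION_DIM, 1)
actor_targ.load_state_dict(actor.state_dict())
critic_targ.load_state_dict(critic.state_dict())

actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
GAMMA, TAU = 0.99, 0.005

def ddpg_update(s, a, r, s_next, done):
    """One gradient step on a replay minibatch; r and done have shape (batch, 1)."""
    # Critic: regress Q(s, a) toward the bootstrapped target r + gamma * Q'(s', mu'(s')).
    with torch.no_grad():
        q_next = critic_targ(torch.cat([s_next, actor_targ(s_next)], dim=-1))
        target = r + GAMMA * (1.0 - done) * q_next
    q = critic(torch.cat([s, a], dim=-1))
    critic_loss = ((q - target) ** 2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: follow the deterministic policy gradient, i.e. maximize Q(s, mu(s)).
    actor_loss = -critic(torch.cat([s, actor(s)], dim=-1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft updates keep the target networks slowly tracking the learned networks.
    with torch.no_grad():
        for net, targ in ((actor, actor_targ), (critic, critic_targ)):
            for p, p_t in zip(net.parameters(), targ.parameters()):
                p_t.mul_(1.0 - TAU).add_(TAU * p)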
Pages: 12
Related Papers
50 records in total
  • [22] Song, Bing; Phan, Minh Q.; Longman, Richard W. Data-Driven Model-Free Iterative Learning Control Using Reinforcement Learning. ASTRODYNAMICS 2018, PTS I-IV, 2019, 167: 2579-2597.
  • [23] Rosdahl, Christian; Bernhardsson, B. O.; Eisenhower, Bryan. Model-free MIMO control tuning of a chiller process using reinforcement learning. SCIENCE AND TECHNOLOGY FOR THE BUILT ENVIRONMENT, 2023, 29(08): 782-794.
  • [24] Sawant, Shambhuraj; Reinhardt, Dirk; Kordabad, Arash Bahari; Gros, Sebastien. Model-free Data-driven Predictive Control Using Reinforcement Learning. 2023 62ND IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2023: 4046-4052.
  • [25] Pei, Yansong; Ye, Ketian; Zhao, Junbo; Yao, Yiyun; Su, Tong; Ding, Fei. Visibility-enhanced model-free deep reinforcement learning algorithm for voltage control in realistic distribution systems using smart inverters. APPLIED ENERGY, 2024, 372.
  • [26] Zhou, Dongqin; Gayah, Vikash V. Model-free perimeter metering control for two-region urban networks using deep reinforcement learning. TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2021, 124.
  • [27] Li, Xiaocan; Mercurius, Ray Coden; Taitler, Ayal; Wang, Xiaoyu; Noaeen, Mohammad; Sanner, Scott; Abdulhai, Baher. Perimeter Control Using Deep Reinforcement Learning: A Model-free Approach towards Homogeneous Flow Rate Optimization. 2023 IEEE 26TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2023: 1474-1479.
  • [28] Xu, Zhenhui; Shen, Tielong; Cheng, Daizhan. Model-Free Reinforcement Learning by Embedding an Auxiliary System for Optimal Control of Nonlinear Systems. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33(04): 1520-1534.
  • [29] Zhang, Xiaoming; Wang, Xinwei; Zhang, Haotian; Ma, Yinghan; Chen, Shaoye; Wang, Chenzheng; Chen, Qili; Xiao, Xiaoyang. Hybrid model-free control based on deep reinforcement learning: An energy-efficient operation strategy for HVAC systems. JOURNAL OF BUILDING ENGINEERING, 2024, 96.
  • [30] Mukherjee, Sayak; Vu, Thanh Long. On Distributed Model-Free Reinforcement Learning Control With Stability Guarantee. IEEE CONTROL SYSTEMS LETTERS, 2021, 5(05): 1615-1620.