Control of neural systems at multiple scales using model-free, deep reinforcement learning

Cited: 13
Authors
Mitchell, B. A. [1]
Petzold, L. R. [1,2]
Affiliations
[1] Univ Calif Santa Barbara, Dept Comp Sci, Santa Barbara, CA 93106 USA
[2] Univ Calif Santa Barbara, Dept Mech Engn, Santa Barbara, CA 93106 USA
Source
SCIENTIFIC REPORTS | 2018, Vol. 8
Keywords
DYNAMICS;
DOI
10.1038/s41598-018-29134-x
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
Recent improvements in hardware and data collection have lowered the barrier to practical neural control. Most current contributions to the field have focused on model-based control; however, models of neural systems are quite complex and difficult to design. To circumvent these issues, we adapt a model-free method from the reinforcement learning literature, Deep Deterministic Policy Gradients (DDPG). Model-free reinforcement learning presents an attractive framework because of the flexibility it offers, allowing the user to avoid modeling system dynamics. We make use of this feature by applying DDPG to models of low-level and high-level neural dynamics. We show that, while model-free, DDPG is able to solve more difficult problems than current methods can. These problems include the induction of global synchrony by entrainment of weakly coupled oscillators and the control of trajectories through a latent phase space of an underactuated network of neurons. While this work has been performed on simulated systems, it suggests that advances in modern reinforcement learning may enable the solution of fundamental problems in neural control and movement towards more complex objectives in real systems.
Pages: 12
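
For orientation, below is a minimal sketch of the kind of DDPG update the abstract describes: an actor-critic pair with target networks and Polyak averaging. This is not the authors' implementation; the network sizes, hyperparameters, and the toy transition batch are illustrative assumptions.

```python
# Minimal DDPG update sketch (PyTorch). Illustrative only: dimensions,
# hyperparameters, and the random batch below are assumptions, not the
# setup used in the paper.
import copy
import torch
import torch.nn as nn

obs_dim, act_dim = 8, 2  # hypothetical state/control dimensions

def mlp(in_dim, out_dim, squash=False):
    layers = [nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim)]
    if squash:
        layers.append(nn.Tanh())  # bounded control signal in [-1, 1]
    return nn.Sequential(*layers)

actor = mlp(obs_dim, act_dim, squash=True)   # deterministic policy mu(s)
critic = mlp(obs_dim + act_dim, 1)           # action-value Q(s, a)
actor_targ = copy.deepcopy(actor)
critic_targ = copy.deepcopy(critic)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma, tau = 0.99, 0.005

def ddpg_update(s, a, r, s2, done):
    # Critic: regress Q(s, a) toward the bootstrapped target computed
    # with the slowly-updated target networks.
    with torch.no_grad():
        q_next = critic_targ(torch.cat([s2, actor_targ(s2)], dim=-1))
        q_targ = r + gamma * (1 - done) * q_next
    q = critic(torch.cat([s, a], dim=-1))
    critic_loss = nn.functional.mse_loss(q, q_targ)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: ascend the deterministic policy gradient, i.e. maximize
    # Q(s, mu(s)) with respect to the policy parameters only.
    actor_loss = -critic(torch.cat([s, actor(s)], dim=-1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Polyak-average the target networks toward the live networks.
    with torch.no_grad():
        for net, targ in ((actor, actor_targ), (critic, critic_targ)):
            for p, pt in zip(net.parameters(), targ.parameters()):
                pt.mul_(1 - tau).add_(tau * p)

# Toy batch standing in for transitions sampled from a replay buffer.
B = 32
s, s2 = torch.randn(B, obs_dim), torch.randn(B, obs_dim)
a = torch.rand(B, act_dim) * 2 - 1
r, done = torch.randn(B, 1), torch.zeros(B, 1)
ddpg_update(s, a, r, s2, done)
```

In the setting the abstract describes, the transitions would come from interacting with a simulated neural system (with exploration noise added to the actor's output), the actions would be stimulation inputs, and the reward would encode the control objective, e.g. a measure of global synchrony.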