Brain-Machine Interface Control of a Robot Arm using Actor-Critic Reinforcement Learning

Cited: 0
Authors
Pohlmeyer, Eric A. [1 ]
Mahmoudi, Babak [1 ]
Geng, Shijia [1 ]
Prins, Noeine [1 ]
Sanchez, Justin C. [1 ]
Institution
[1] Miami Univ, Dept Biomed Engn, Coral Gables, FL 33146 USA
Keywords
COMPUTER INTERFACE; MOVEMENT SIGNAL; RULE
DOI
Not available
Chinese Library Classification (CLC)
R318 [Biomedical Engineering]
Discipline Code
0831
Abstract
Here we demonstrate how a marmoset monkey can use a reinforcement learning (RL) Brain-Machine Interface (BMI) to effectively control the movements of a robot arm for a reaching task. In this work, an actor-critic RL algorithm used neural ensemble activity in the monkey's motor cortex to control the robot's movements during a two-target decision task. This novel approach to decoding offers unique advantages for BMI control applications. Unlike supervised-learning decoding methods, the actor-critic RL algorithm does not require an explicit set of training data to create a static control model; instead, it incrementally adapts the model parameters according to its current performance, in this case requiring only a very basic feedback signal. We show that this algorithm achieved high performance in mapping the monkey's neural states to robot actions (94%) and needed only a few trials of experience before obtaining accurate real-time control of the robot arm. Since RL methods responsively adapt and adjust their parameters, they can provide a way to create BMIs that are robust against perturbations caused by changes in either the neural input space or the output actions they generate under different task requirements or goals.
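The decoding scheme described in the abstract can be illustrated with a small sketch. The snippet below is not the authors' implementation; it is a minimal, hypothetical actor-critic update for a two-action (two-target) task, assuming a binned spike-count vector as the neural state, a softmax actor, a linear critic, and a scalar +1/-1 feedback signal as the only supervision. All names, learning rates, and the synthetic data generator are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_actions = 64, 2                 # ensemble size and two-target action set (assumed)
W_actor = np.zeros((n_actions, n_neurons))   # actor: neural state -> action preferences
w_critic = np.zeros(n_neurons)               # critic: neural state -> expected feedback (value)
lr_actor, lr_critic = 0.05, 0.1              # learning rates (assumed)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def select_action(x):
    """Actor: sample a robot action (e.g. reach to target A or B) from neural state x."""
    probs = softmax(W_actor @ x)
    return rng.choice(n_actions, p=probs), probs

def adapt(x, action, probs, feedback):
    """Adapt actor and critic from a single scalar feedback signal (+1 correct, -1 incorrect)."""
    global W_actor, w_critic
    delta = feedback - w_critic @ x              # prediction error for this single-step trial
    w_critic = w_critic + lr_critic * delta * x  # critic: improve the value prediction
    grad = -probs[:, None] * x[None, :]          # softmax policy gradient for the actor ...
    grad[action] += x                            # ... (1[a] - pi) * x, scaled by delta below
    W_actor = W_actor + lr_actor * delta * grad

# Toy run: synthetic binned spike counts whose statistics depend on the cued target,
# so the decoder can adapt trial by trial from feedback alone.
tuning = rng.integers(n_actions, size=n_neurons)     # each unit's preferred target (synthetic)
for trial in range(200):
    target = rng.integers(n_actions)
    x = (rng.random(n_neurons) < np.where(tuning == target, 0.6, 0.2)).astype(float)
    a, probs = select_action(x)
    adapt(x, a, probs, feedback=1.0 if a == target else -1.0)
```

As in the paper's description, the only training signal is the per-trial feedback: the critic's prediction error both corrects the value estimate and weights the actor's policy update, so the mapping from neural states to robot actions keeps adapting during use.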
Pages: 4108 - 4111
Page count: 4
Related Papers
50 items in total
  • [41] A Soft Actor-Critic Deep Reinforcement-Learning-Based Robot Navigation Method Using LiDAR
    Liu, Yanjie
    Wang, Chao
    Zhao, Changsen
    Wu, Heng
    Wei, Yanlong
    REMOTE SENSING, 2024, 16 (12)
  • [42] Deep Actor-Critic Reinforcement Learning for Anomaly Detection
    Zhong, Chen
    Gursoy, M. Cenk
    Velipasalar, Senem
    2019 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2019,
  • [43] MARS: Malleable Actor-Critic Reinforcement Learning Scheduler
    Baheri, Betis
    Tronge, Jacob
    Fang, Bo
    Li, Ang
    Chaudhary, Vipin
    Guan, Qiang
    2022 IEEE INTERNATIONAL PERFORMANCE, COMPUTING, AND COMMUNICATIONS CONFERENCE, IPCCC, 2022,
  • [44] Averaged Soft Actor-Critic for Deep Reinforcement Learning
    Ding, Feng
    Ma, Guanfeng
    Chen, Zhikui
    Gao, Jing
    Li, Peng
    COMPLEXITY, 2021, 2021
  • [45] Model Predictive Actor-Critic: Accelerating Robot Skill Acquisition with Deep Reinforcement Learning
    Morgan, Andrew S.
    Nandha, Daljeet
    Chalvatzaki, Georgia
    D'Eramo, Carlo
    Dollar, Aaron M.
    Peters, Jan
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 6672 - 6678
  • [46] Symmetric actor-critic deep reinforcement learning for cascade quadrotor flight control
    Han, Haoran
    Cheng, Jian
    Xi, Zhilong
    Lv, Maolong
    NEUROCOMPUTING, 2023, 559
  • [47] Adaptive Assist-as-needed Control Based on Actor-Critic Reinforcement Learning
    Zhang, Yufeng
    Li, Shuai
    Nolan, Karen J.
    Zanotto, Damiano
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 4066 - 4071
  • [48] Learning robot stiffness for contact tasks using the Natural Actor-Critic
    Kim, Byungchan
    Kang, Byungduk
    Park, Shinsuk
    Kang, Sungchul
    2008 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-9, 2008: 3832+
  • [49] Design of Observer-Based Control With Residual Generator Using Actor-Critic Reinforcement Learning
    Qian L.
    Zhao X.
    Liu P.
    Zhang Z.
    Lv Y.
    IEEE Transactions on Artificial Intelligence, 2023, 4 (04): 734 - 743
  • [50] Actor-Critic Reinforcement Learning for Linear Longitudinal Output Control of a Road Vehicle
    Puccetti, Luca
    Rathgeber, Christian
    Hohmann, Soeren
    2019 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2019, : 2907 - 2913