MetaRLEC: Meta-Reinforcement Learning for Discovery of Brain Effective Connectivity

Times cited: 0
Authors
Zhang, Zuozhen [1 ]
Ji, Junzhong [1 ]
Liu, Jinduo [1 ]
Affiliations
[1] Beijing Univ Technol, Beijing Municipal Key Lab Multimedia & Intelligen, Beijing Inst Artificial Intelligence, Fac Informat Technol, Beijing, Peoples R China
Source
THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 9 | 2024
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, the discovery of brain effective connectivity (EC) networks through computational analysis of functional magnetic resonance imaging (fMRI) data has gained prominence in neuroscience and neuroimaging. However, owing to diverse factors during data collection and processing, fMRI data are typically high in noise and limited in sample size, which leads to suboptimal performance of current methods. In this paper, we propose a novel brain effective connectivity discovery method based on meta-reinforcement learning, called MetaRLEC. The method mainly consists of three modules: actor, critic, and meta-critic. MetaRLEC first employs an encoder-decoder framework: the encoder, built on a transformer, converts noisy fMRI data into a state embedding, and the decoder, a bidirectional LSTM, discovers brain-region dependencies from that state and generates actions (EC networks). Then, a critic network evaluates these actions, incentivizing the actor to learn higher-reward actions in the high-noise setting. Finally, a meta-critic framework facilitates online learning of historical state-action pairs, integrating an action-value neural network and supplementary training losses to enhance the model's adaptability to small-sample fMRI data. We conduct comprehensive experiments on both simulated and real-world data to demonstrate the efficacy of our proposed method.
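For readers who want a concrete picture of the actor described in the abstract, the sketch below is a minimal, hypothetical PyTorch rendering of that encoder-decoder: a transformer encoder turns per-region fMRI time series into state embeddings, and a bidirectional LSTM decoder scores directed region pairs to sample an EC adjacency matrix (the action). All class names, dimensions, and the edge-scoring head are illustrative assumptions, not the authors' published implementation; the critic and meta-critic modules are omitted.

# Hypothetical sketch of the transformer-encoder / BiLSTM-decoder actor.
# Names and shapes are assumptions for illustration only.
import torch
import torch.nn as nn

class ActorSketch(nn.Module):
    def __init__(self, n_regions, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # Embed each scalar BOLD sample into the model dimension.
        self.embed = nn.Linear(1, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        # Bidirectional LSTM decoder over the per-region state embeddings.
        self.decoder = nn.LSTM(d_model, d_model, batch_first=True,
                               bidirectional=True)
        # Score every ordered pair of regions to produce directed-edge logits.
        self.edge_scorer = nn.Bilinear(2 * d_model, 2 * d_model, 1)
        self.n_regions = n_regions

    def forward(self, fmri):
        # fmri: (batch, n_regions, n_timepoints) BOLD signals.
        b, n, t = fmri.shape
        x = self.embed(fmri.reshape(b * n, t, 1))               # one token per time point
        state = self.encoder(x).mean(dim=1).reshape(b, n, -1)   # per-region state embedding
        h, _ = self.decoder(state)                               # (b, n, 2*d_model)
        src = h.unsqueeze(2).expand(b, n, n, h.size(-1))
        dst = h.unsqueeze(1).expand(b, n, n, h.size(-1))
        logits = self.edge_scorer(src.reshape(-1, h.size(-1)),
                                  dst.reshape(-1, h.size(-1))).reshape(b, n, n)
        # Sampling each directed edge yields the action, i.e. a candidate EC network;
        # the logits would feed the policy gradient against the critic's reward.
        return torch.bernoulli(torch.sigmoid(logits)), logits

In a reinforcement-learning loop, the sampled adjacency matrix would be scored by a critic (and, per the abstract, refined by a meta-critic trained online on historical state-action pairs); that training machinery is not shown here.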
Pages: 10261-10269
Number of pages: 9
Related papers
50 records in total
  • [1] A Meta-Reinforcement Learning Algorithm for Causal Discovery
    Sauter, Andreas
    Acar, Erman
    Francois-Lavet, Vincent
    CONFERENCE ON CAUSAL LEARNING AND REASONING, VOL 213, 2023, 213 : 602 - 619
  • [2] Hypernetworks in Meta-Reinforcement Learning
    Beck, Jacob
    Jackson, Matthew
    Vuorio, Risto
    Whiteson, Shimon
    CONFERENCE ON ROBOT LEARNING, VOL 205, 2022, 205 : 1478 - 1487
  • [3] Towards Effective Context for Meta-Reinforcement Learning: an Approach based on Contrastive Learning
    Fu, Haotian
    Tang, Hongyao
    Hao, Jianye
    Chen, Chen
    Feng, Xidong
    Li, Dong
    Liu, Wulong
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 7457 - 7465
  • [4] Prefrontal cortex as a meta-reinforcement learning system
    Wang, Jane X.
    Kurth-Nelson, Zeb
    Kumaran, Dharshan
    Tirumala, Dhruva
    Soyer, Hubert
    Leibo, Joel Z.
    Hassabis, Demis
    Botvinick, Matthew
    NATURE NEUROSCIENCE, 2018, 21 : 860 - 868
  • [5] Offline Meta-Reinforcement Learning for Industrial Insertion
    Zhao, Tony Z.
    Luo, Jianlan
    Sushkov, Oleg
    Pevceviciute, Rugile
    Heess, Nicolas
    Scholz, Jon
    Schaal, Stefan
    Levine, Sergey
    2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2022, 2022, : 6386 - 6393
  • [6] A Meta-Reinforcement Learning Approach to Process Control
    McClement, Daniel G.
    Lawrence, Nathan P.
    Loewen, Philip D.
    Forbes, Michael G.
    Backstrom, Johan U.
    Gopaluni, R. Bhushan
    IFAC PAPERSONLINE, 2021, 54 (03): : 685 - 692
  • [7] Meta-Reinforcement Learning of Structured Exploration Strategies
    Gupta, Abhishek
    Mendonca, Russell
    Liu, YuXuan
    Abbeel, Pieter
    Levine, Sergey
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [8] Unsupervised Curricula for Visual Meta-Reinforcement Learning
    Jabri, Allan
    Hsu, Kyle
    Eysenbach, Benjamin
    Gupta, Abhishek
    Levine, Sergey
    Finn, Chelsea
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [9] Meta-Reinforcement Learning With Dynamic Adaptiveness Distillation
    Hu, Hangkai
    Huang, Gao
    Li, Xiang
    Song, Shiji
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (03) : 1454 - 1464
  • [10] Formalising Performance Guarantees in Meta-Reinforcement Learning
    Mahony, Amanda
    FORMAL METHODS AND SOFTWARE ENGINEERING, ICFEM 2018, 2018, 11232 : 469 - 472