MetaRLEC: Meta-Reinforcement Learning for Discovery of Brain Effective Connectivity

Cited: 0
Authors
Zhang, Zuozhen [1 ]
Ji, Junzhong [1 ]
Liu, Jinduo [1 ]
Affiliations
[1] Beijing Univ Technol, Beijing Municipal Key Lab Multimedia & Intelligen, Beijing Inst Artificial Intelligence, Fac Informat Technol, Beijing, Peoples R China
Source
THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 9 | 2024
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, the discovery of brain effective connectivity (EC) networks through computational analysis of functional magnetic resonance imaging (fMRI) data has gained prominence in neuroscience and neuroimaging. However, owing to diverse influences during data collection and processing, fMRI data typically exhibit high noise and small-sample characteristics, which lead to suboptimal performance of current methods. In this paper, we propose a novel brain effective connectivity discovery method based on meta-reinforcement learning, called MetaRLEC. The method consists of three main modules: an actor, a critic, and a meta-critic. MetaRLEC first employs an encoder-decoder framework: the encoder, based on a Transformer, converts noisy fMRI data into a state embedding, and the decoder, based on a bidirectional LSTM, discovers dependencies among brain regions from the state and generates actions (EC networks). A critic network then evaluates these actions, incentivizing the actor to learn higher-reward actions in the high-noise setting. Finally, a meta-critic framework facilitates online learning of historical state-action pairs, integrating an action-value neural network and supplementary training losses to enhance the model's adaptability to small-sample fMRI data. We conduct comprehensive experiments on both simulated and real-world data to demonstrate the efficacy of the proposed method.
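The encode-decode-evaluate pipeline described in the abstract can be sketched with toy stand-ins: plain NumPy linear maps in place of the paper's Transformer encoder and BiLSTM decoder, and a simple fit-plus-sparsity score in place of the learned critic. All function names, shapes, and the reward form below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(X, W_enc):
    # "Encoder": project each region's noisy time series to an embedding
    # (stand-in for the Transformer encoder in MetaRLEC).
    return np.tanh(X @ W_enc)

def decode(S, W_dec):
    # "Decoder": score every ordered pair of regions and threshold the
    # scores into a binary EC adjacency "action" (stand-in for the
    # bidirectional-LSTM decoder). A[i, j] = 1 means edge i -> j.
    logits = S @ W_dec @ S.T
    A = (logits > 0).astype(int)
    np.fill_diagonal(A, 0)  # no self-connections
    return A

def critic_reward(A, X):
    # Toy critic: reward graphs whose parents explain each region's
    # signal via least squares, minus a sparsity penalty (a rough
    # stand-in for a learned action-value estimate).
    resid = 0.0
    for j in range(A.shape[0]):
        parents = np.flatnonzero(A[:, j])
        y = X[j]
        if parents.size:
            P = X[parents].T                      # (time, n_parents)
            beta, *_ = np.linalg.lstsq(P, y, rcond=None)
            y = y - P @ beta                      # residual after fit
        resid += float(y @ y)
    return -resid - 0.1 * A.sum()

n_regions, n_timepoints, d = 5, 100, 8
X = rng.standard_normal((n_regions, n_timepoints))  # noisy fMRI-like data
W_enc = rng.standard_normal((n_timepoints, d))
W_dec = rng.standard_normal((d, d))

S = encode(X, W_enc)      # state embedding, shape (n_regions, d)
A = decode(S, W_dec)      # action: candidate EC network, shape (5, 5)
r = critic_reward(A, X)   # scalar reward fed back to the actor
```

In the actual method, the critic's evaluation would drive gradient updates of the encoder-decoder actor, and the meta-critic would additionally adapt the action-value estimate online from historical state-action pairs; this sketch only illustrates one forward pass of the state-action-reward loop.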
Pages: 10261-10269
Page count: 9
Related Papers
50 items in total
  • [21] Meta-reinforcement learning for edge caching in vehicular networks
    Sakr H.
    Elsabrouty M.
    Journal of Ambient Intelligence and Humanized Computing, 2023, 14 (04) : 4607 - 4619
  • [22] Wireless Power Control via Meta-Reinforcement Learning
    Lu, Ziyang
    Gursoy, M. Cenk
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022, : 1562 - 1567
  • [23] Prioritized Hindsight with Dual Buffer for Meta-Reinforcement Learning
    Beyene, Sofanit Wubeshet
    Han, Ji-Hyeong
    ELECTRONICS, 2022, 11 (24)
  • [24] Doubly Robust Augmented Transfer for Meta-Reinforcement Learning
    Jiang, Yuankun
    Kan, Nuowen
    Li, Chenglin
    Dai, Wenrui
    Zou, Junni
    Xiong, Hongkai
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [25] PAC-Bayesian offline Meta-reinforcement learning
    Sun, Zheng
    Jing, Chenheng
    Guo, Shangqi
    An, Lingling
    APPLIED INTELLIGENCE, 2023, 53 (22) : 27128 - 27147
  • [26] Meta-Reinforcement Learning for Multiple Traffic Signals Control
    Lou, Yican
    Wu, Jia
    Ran, Yunchuan
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 4264 - 4268
  • [27] Dynamic Channel Access via Meta-Reinforcement Learning
    Lu, Ziyang
    Gursoy, M. Cenk
    2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021,
  • [28] PAC-Bayesian offline Meta-reinforcement learning
    Zheng Sun
    Chenheng Jing
    Shangqi Guo
    Lingling An
    Applied Intelligence, 2023, 53 : 27128 - 27147
  • [29] Meta-Reinforcement Learning via Exploratory Task Clustering
    Chu, Zhendong
    Cai, Renqin
    Wang, Hongning
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 10, 2024, : 11633 - 11641
  • [30] Global-Local Decomposition of Contextual Representations in Meta-Reinforcement Learning
    Ma, Nelson
    Xuan, Junyu
    Zhang, Guangquan
    Lu, Jie
    IEEE TRANSACTIONS ON CYBERNETICS, 2025, 55 (03) : 1277 - 1287