HMM for discovering decision-making dynamics using reinforcement learning experiments

Cited by: 0
Authors
Guo, Xingche [1]
Zeng, Donglin [2]
Wang, Yuanjia [1,3]
Affiliations
[1] Columbia Univ, Dept Biostat, 722 West 168th St, New York, NY 10032 USA
[2] Univ Michigan, Dept Biostat, 1415 Washington Hts, Ann Arbor, MI 48109 USA
[3] Columbia Univ, Dept Psychiat, 1051 Riverside Dr, New York, NY 10032 USA
Funding
U.S. National Institutes of Health (NIH)
Keywords
behavioral phenotyping; brain-behavior association; mental health; reinforcement learning; reward tasks; state-switching; PSYCHIATRY; TASK;
DOI
10.1093/biostatistics/kxae033
Chinese Library Classification (CLC)
Q [Biological Sciences];
Subject classification codes
07; 0710; 09;
Abstract
Major depressive disorder (MDD), a leading cause of years lived with disability, presents challenges in diagnosis and treatment due to its complex and heterogeneous nature. Emerging evidence indicates that reward processing abnormalities may serve as a behavioral marker for MDD. To measure reward processing, patients perform computer-based behavioral tasks in the laboratory that involve making choices or responding to stimuli associated with different outcomes, such as gains or losses. Reinforcement learning (RL) models are fitted to extract parameters that measure various aspects of reward processing (e.g. reward sensitivity) and characterize how patients make decisions in behavioral tasks. Recent findings suggest that characterizing reward learning with a single RL model is inadequate; instead, decision-making may switch among multiple strategies. An important scientific question is how the dynamics of decision-making strategies affect the reward learning ability of individuals with MDD. Motivated by the probabilistic reward task in the Establishing Moderators and Biosignatures of Antidepressant Response in Clinical Care (EMBARC) study, we propose a novel RL-HMM (hidden Markov model) framework for analyzing reward-based decision-making. Our model accommodates switching between two distinct decision-making strategies under an HMM: subjects either make decisions based on the RL model or opt for random choices. We account for a continuous RL state space and allow time-varying transition probabilities in the HMM. We introduce a computationally efficient expectation-maximization (EM) algorithm for parameter estimation and use a nonparametric bootstrap for inference. Extensive simulation studies validate the finite-sample performance of our method. Applying our approach to the EMBARC study, we show that MDD patients are less engaged in RL than healthy controls, and that engagement is associated with brain activity in the negative affect circuitry during an emotional conflict task.
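To make the strategy-switching idea in the abstract concrete, the sketch below simulates a simplified generative version of such an RL-HMM: a Q-learner on a two-armed probabilistic reward task whose trial-by-trial choices alternate between a softmax rule (an "engaged" state) and uniform random responding, governed by a two-state Markov chain. This is an illustrative sketch under stated assumptions, not the authors' implementation: the function and parameter names (simulate_rl_hmm, alpha, beta, p_stay_engaged, reward_probs) are hypothetical, the transition probabilities are held constant rather than time-varying, and the continuous RL state space, EM estimation, and bootstrap inference described in the paper are omitted.

```python
# Minimal generative sketch of an RL-HMM-style switching learner (illustrative only;
# names, values, and simplifications below are assumptions, not the paper's method).
import numpy as np

rng = np.random.default_rng(0)

def simulate_rl_hmm(n_trials=200, alpha=0.2, beta=3.0,
                    p_stay_engaged=0.95, p_stay_random=0.80,
                    reward_probs=(0.7, 0.3)):
    """Simulate a two-armed probabilistic reward task in which the learner switches
    between an RL-driven ('engaged') state and a random-choice state."""
    Q = np.zeros(2)                 # action values updated by a prediction-error rule
    engaged = True                  # hidden HMM state: True = RL-driven, False = random
    choices, rewards, states = [], [], []
    for _ in range(n_trials):
        # Hidden-state transition (time-homogeneous here; the paper allows
        # time-varying transition probabilities).
        stay = p_stay_engaged if engaged else p_stay_random
        if rng.random() > stay:
            engaged = not engaged
        # Emission: the choice rule depends on the hidden state.
        if engaged:
            p_choose = np.exp(beta * Q) / np.exp(beta * Q).sum()   # softmax over Q-values
        else:
            p_choose = np.array([0.5, 0.5])                        # uniform random choice
        a = rng.choice(2, p=p_choose)
        r = float(rng.random() < reward_probs[a])                  # probabilistic binary reward
        Q[a] += alpha * (r - Q[a])                                 # Q-learning update
        choices.append(a)
        rewards.append(r)
        states.append(engaged)
    return np.array(choices), np.array(rewards), np.array(states)

choices, rewards, states = simulate_rl_hmm()
print(f"Fraction of trials in the engaged state: {states.mean():.2f}")
print(f"Overall reward rate: {rewards.mean():.2f}")
```

Fitting such a model to observed choices would require marginalizing over the hidden engagement sequence, for example with a forward-backward pass inside an EM loop, which is the estimation strategy the abstract describes.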
Pages: 16