HMM for discovering decision-making dynamics using reinforcement learning experiments

Cited by: 0
Authors
Guo, Xingche [1 ]
Zeng, Donglin [2 ]
Wang, Yuanjia [1 ,3 ]
Affiliations
[1] Columbia Univ, Dept Biostat, 722 West 168th St, New York, NY 10032 USA
[2] Univ Michigan, Dept Biostat, 1415 Washington Hts, Ann Arbor, MI 48109 USA
[3] Columbia Univ, Dept Psychiat, 1051 Riverside Dr, New York, NY 10032 USA
Funding
National Institutes of Health (NIH);
Keywords
behavioral phenotyping; brain-behavior association; mental health; reinforcement learning; reward tasks; state-switching; PSYCHIATRY; TASK;
DOI
10.1093/biostatistics/kxae033
CLC Classification
Q [Biological Sciences];
Discipline Classification
07; 0710; 09;
Abstract
Major depressive disorder (MDD), a leading cause of years of life lived with disability, presents challenges in diagnosis and treatment due to its complex and heterogeneous nature. Emerging evidence indicates that reward processing abnormalities may serve as a behavioral marker for MDD. To measure reward processing, patients perform computer-based behavioral tasks in the laboratory that involve making choices or responding to stimuli associated with different outcomes, such as gains or losses. Reinforcement learning (RL) models are fitted to extract parameters that measure various aspects of reward processing (e.g., reward sensitivity) and characterize how patients make decisions in behavioral tasks. Recent findings suggest that characterizing reward learning with a single RL model is inadequate; instead, decision-making may switch between multiple strategies. An important scientific question is how the dynamics of decision-making strategies affect the reward learning ability of individuals with MDD. Motivated by the probabilistic reward task within the Establishing Moderators and Biosignatures of Antidepressant Response in Clinical Care (EMBARC) study, we propose a novel RL-HMM (hidden Markov model) framework for analyzing reward-based decision-making. Our model accommodates switching between two distinct decision-making strategies under an HMM: subjects either make decisions based on the RL model or opt for random choices. We account for a continuous RL state space and allow time-varying transition probabilities in the HMM. We introduce a computationally efficient expectation-maximization (EM) algorithm for parameter estimation and use a nonparametric bootstrap for inference. Extensive simulation studies validate the finite-sample performance of our method. We apply our approach to the EMBARC study to show that MDD patients are less engaged in RL than healthy controls, and that engagement is associated with brain activity in the negative-affect circuitry during an emotional conflict task.
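The abstract describes the RL-HMM only at a high level. As a rough illustration of the two-strategy switching idea (not the authors' implementation), the Python sketch below simulates a two-armed probabilistic reward task in which a latent Markov state alternates between an "engaged" strategy (softmax choices over Q-learning values) and a "random choice" strategy. All parameter values (learning rate, inverse temperature, transition matrix, reward probabilities) are assumed for illustration, and the transition matrix is held fixed rather than time-varying; the paper's EM estimation and bootstrap inference are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative parameters (assumed values, not estimates from the paper) ---
alpha, beta = 0.2, 3.0              # Q-learning rate and softmax inverse temperature
P = np.array([[0.95, 0.05],         # HMM transition matrix: state 0 = "engaged" (RL),
              [0.10, 0.90]])        #                         state 1 = "random choice"
reward_prob = np.array([0.3, 0.7])  # probabilistic reward task: arm 1 is the "rich" arm

n_trials = 200
Q = np.zeros(2)                     # continuous RL state (action values)
z = 0                               # latent engagement state
choices, rewards, states = [], [], []

for t in range(n_trials):
    if z == 0:                                  # engaged: softmax over Q-values
        p1 = 1.0 / (1.0 + np.exp(-beta * (Q[1] - Q[0])))
        a = rng.binomial(1, p1)
    else:                                       # disengaged: uniform random choice
        a = int(rng.integers(2))
    r = float(rng.random() < reward_prob[a])    # Bernoulli reward for the chosen arm
    Q[a] += alpha * (r - Q[a])                  # prediction-error update
    choices.append(a); rewards.append(r); states.append(z)
    z = rng.choice(2, p=P[z])                   # Markov switch of the latent strategy

print("fraction of trials in the engaged state:", np.mean(np.array(states) == 0))
```

In the generative model sketched here, the observed choices mix RL-driven and random behavior according to the hidden state sequence; fitting such a model to real task data would require marginalizing over that sequence, which is what an EM algorithm of the kind described in the abstract is designed to do.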
Pages: 16
Related Articles
50 records in total
  • [11] Application of Reinforcement Learning in Multiagent Intelligent Decision-Making
    Han, Xiaoyu
    COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2022, 2022
  • [12] Extracting the dynamics of behavior in sensory decision-making experiments
    Roy, Nicholas A.
    Bak, Ji Hyun
    Akrami, Athena
    Brody, Carlos D.
    Pillow, Jonathan W.
    NEURON, 2021, 109 (04) : 597 - 610.e6
  • [13] A reinforcement learning approach to irrigation decision-making for rice using weather forecasts
    Chen, Mengting
    Cui, Yuanlai
    Wang, Xiaonan
    Xie, Hengwang
    Liu, Fangping
    Luo, Tongyuan
    Zheng, Shizong
    Luo, Yufeng
    AGRICULTURAL WATER MANAGEMENT, 2021, 250
  • [14] Decision-Making Strategy on Highway for Autonomous Vehicles Using Deep Reinforcement Learning
    Liao, Jiangdong
    Liu, Teng
    Tang, Xiaolin
    Mu, Xingyu
    Huang, Bing
    Cao, Dongpu
    IEEE ACCESS, 2020, 8 (08): : 177804 - 177814
  • [16] Negotiable Reinforcement Learning for Pareto Optimal Sequential Decision-Making
    Desai, Nishant
    Critch, Andrew
    Russell, Stuart
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [17] Decision-making models on perceptual uncertainty with distributional reinforcement learning
    Xu, Shuyuan
    Liu, Qiao
    Hu, Yuhui
    Xu, Mengtian
    Hao, Jiachen
    GREEN ENERGY AND INTELLIGENT TRANSPORTATION, 2023, 2 (02):
  • [18] Cognitive Reinforcement Learning: An Interpretable Decision-Making for Virtual Driver
    Qi, Hao
    Hou, Enguang
    Ye, Peijun
    IEEE JOURNAL OF RADIO FREQUENCY IDENTIFICATION, 2024, 8 : 627 - 631
  • [19] Reinforcement Learning with Uncertainty Estimation for Tactical Decision-Making in Intersections
    Hoel, Carl-Johan
    Tram, Tommy
    Sjoberg, Jonas
    2020 IEEE 23RD INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2020,
  • [20] A Multiple-Attribute Decision-Making Approach to Reinforcement Learning
    Shi, Haobin
    Xu, Meng
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2020, 12 (04) : 695 - 708