HMM for discovering decision-making dynamics using reinforcement learning experiments

Cited by: 0
Authors
Guo, Xingche [1 ]
Zeng, Donglin [2 ]
Wang, Yuanjia [1 ,3 ]
Affiliations
[1] Columbia Univ, Dept Biostat, 722 West 168th St, New York, NY 10032 USA
[2] Univ Michigan, Dept Biostat, 1415 Washington Hts, Ann Arbor, MI 48109 USA
[3] Columbia Univ, Dept Psychiat, 1051 Riverside Dr, New York, NY 10032 USA
Funding
US National Institutes of Health;
Keywords
behavioral phenotyping; brain-behavior association; mental health; reinforcement learning; reward tasks; state-switching; PSYCHIATRY; TASK;
DOI
10.1093/biostatistics/kxae033
Chinese Library Classification (CLC)
Q [Biological Sciences];
Discipline codes
07; 0710; 09;
Abstract
Major depressive disorder (MDD), a leading cause of years of life lived with disability, presents challenges in diagnosis and treatment due to its complex and heterogeneous nature. Emerging evidence indicates that reward processing abnormalities may serve as a behavioral marker for MDD. To measure reward processing, patients perform computer-based behavioral tasks in the laboratory that involve making choices or responding to stimuli associated with different outcomes, such as gains or losses. Reinforcement learning (RL) models are fitted to extract parameters that measure various aspects of reward processing (e.g., reward sensitivity) to characterize how patients make decisions in behavioral tasks. Recent findings suggest that a single RL model is inadequate for characterizing reward learning; instead, decision-making may switch between multiple strategies. An important scientific question is how the dynamics of decision-making strategies affect the reward-learning ability of individuals with MDD. Motivated by the probabilistic reward task within the Establishing Moderators and Biosignatures of Antidepressant Response in Clinical Care (EMBARC) study, we propose a novel RL-HMM (hidden Markov model) framework for analyzing reward-based decision-making. Our model accommodates decision-making strategy switching between two distinct approaches under an HMM: subjects either make decisions based on the RL model or opt for random choices. We account for a continuous RL state space and allow time-varying transition probabilities in the HMM. We introduce a computationally efficient expectation-maximization (EM) algorithm for parameter estimation and use a nonparametric bootstrap for inference. Extensive simulation studies validate the finite-sample performance of our method. We apply our approach to the EMBARC study to show that MDD patients are less engaged in RL compared with healthy controls, and that engagement is associated with brain activity in the negative affect circuitry during an emotional conflict task.
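The two-strategy switching idea described in the abstract can be illustrated with a minimal sketch. The code below is a simplified illustration, not the authors' implementation: it evaluates the log-likelihood of a binary-choice sequence under a two-state HMM in which an "engaged" state emits choices from a softmax over Q-learning values and a "random" state emits uniform choices. The parameter names and values (learning rate `alpha`, inverse temperature `beta`, the fixed transition probabilities, and the initial state probability) are illustrative assumptions; the paper's model additionally handles a continuous RL state space and time-varying transition probabilities.

```python
import numpy as np

def rl_hmm_loglik(choices, rewards, alpha=0.2, beta=3.0,
                  stay_engaged=0.9, stay_random=0.8, p0_engaged=0.9):
    """Forward-algorithm log-likelihood for a toy two-state RL-HMM.

    State 0 ("engaged"): choices follow a softmax over Q-learning values.
    State 1 ("random"):  choices are uniform over the two options.
    All parameter values are illustrative; fixed transitions and binary
    choices simplify the model described in the abstract.
    """
    Q = np.zeros(2)                                   # Q-values for the two options
    A = np.array([[stay_engaged, 1.0 - stay_engaged],  # hidden-state transition matrix
                  [1.0 - stay_random, stay_random]])
    f = np.array([p0_engaged, 1.0 - p0_engaged])       # forward probabilities, normalized
    loglik = 0.0
    for t, (c, r) in enumerate(zip(choices, rewards)):
        # emission probability of the observed choice under each strategy
        expQ = np.exp(beta * Q)
        p_softmax = expQ / expQ.sum()
        emis = np.array([p_softmax[c], 0.5])
        if t > 0:
            f = f @ A                                  # propagate the hidden strategy
        f = f * emis                                   # weight by the choice likelihood
        norm = f.sum()
        loglik += np.log(norm)
        f = f / norm
        # Rescorla-Wagner / Q-learning update of the chosen option's value
        Q[c] += alpha * (r - Q[c])
    return loglik

# Usage with simulated data
rng = np.random.default_rng(0)
choices = rng.integers(0, 2, size=100)
rewards = rng.binomial(1, 0.7, size=100)
print(rl_hmm_loglik(choices, rewards))
```

In an EM fit along the lines sketched in the abstract, this forward pass would be paired with a backward pass to obtain posterior probabilities of the "engaged" versus "random" state at each trial; here it only returns the marginal log-likelihood of the observed choices.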
Pages: 16