Model-Based Offline Reinforcement Learning for Autonomous Delivery of Guidewire

Times Cited: 0
Authors
Li, Hao [1 ]
Zhou, Xiao-Hu [1 ]
Xie, Xiao-Liang [1 ]
Liu, Shi-Qi [1 ]
Feng, Zhen-Qiu [1 ]
Gui, Mei-Jiang [1 ]
Xiang, Tian-Yu [1 ]
Huang, De-Xing [1 ]
Hou, Zeng-Guang [1 ]
Affiliations
[1] Chinese Academy of Sciences, Institute of Automation, State Key Laboratory of Multimodal Artificial Intelligence, Beijing 100190, People's Republic of China
Source
Funding
National Natural Science Foundation of China;
Keywords
Data models; Training; Arteries; Reinforcement learning; Instruments; Catheters; Predictive models; Offline reinforcement learning; deep neural network; vascular robotic system; robot assisted intervention; PERCUTANEOUS CORONARY INTERVENTION;
DOI
10.1109/TMRB.2024.3407349
CLC Number
R318 [Biomedical Engineering];
Discipline Code
0831;
Abstract
Guidewire delivery is a fundamental procedure in percutaneous coronary intervention. The inherent flexibility of the guidewire makes precise control difficult, demanding long-term training and substantial expertise. In response, this paper proposes a novel offline reinforcement learning (RL) algorithm, Conservative Offline Reinforcement Learning with Variational Environment Model (CORVE), for autonomous guidewire delivery. CORVE first uses offline data to train an environment model and then optimizes the policy with both offline and model-generated data. The proposed method shares an encoder among the environment model, policy, and Q-function, mitigating the sample inefficiency common in image-based RL. In addition, CORVE uses model prediction errors to forecast failed deliveries at inference time, a capability absent from existing methods. Experimental results show that CORVE outperforms existing methods in guidewire delivery, achieving notably higher success rates and smoother movements. These findings suggest that CORVE holds significant potential for enhancing the autonomy of vascular robotic systems in clinical settings.
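The model-based offline RL pipeline the abstract describes (fit an environment model to offline data, generate short model rollouts to augment the dataset, and use the model's prediction error as a reliability signal) can be illustrated with a toy sketch. This is not the paper's CORVE architecture: the 1-D linear system, least-squares dynamics model, and all variable names below are illustrative assumptions standing in for the variational environment model and image-based encoder used in the actual work.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy offline dataset: transitions (s, a, s') from a 1-D linear system ---
# True dynamics (unknown to the agent): s' = 0.9*s + 0.5*a + noise
N = 200
S = rng.uniform(-1, 1, size=(N, 1))
A = rng.uniform(-1, 1, size=(N, 1))
S_next = 0.9 * S + 0.5 * A + 0.01 * rng.standard_normal((N, 1))

# --- Step 1: train an environment model from offline data (least squares) ---
X = np.hstack([S, A])                          # features: [s, a]
theta, *_ = np.linalg.lstsq(X, S_next, rcond=None)

def model_step(s, a):
    """Predicted next state under the learned dynamics model."""
    return np.hstack([s, a]) @ theta

# --- Step 2: augment offline data with short model-generated rollouts ---
# Short horizons keep compounding model error in check.
roll_S, roll_A, roll_Snext = [], [], []
for s0 in S[:20]:
    s = s0
    for _ in range(3):
        a = rng.uniform(-1, 1, size=(1,))
        s_next = model_step(s, a)
        roll_S.append(s), roll_A.append(a), roll_Snext.append(s_next)
        s = s_next

# --- Step 3: model prediction error as a reliability signal ---
# A large error on an observed transition would flag it as untrustworthy,
# analogous to forecasting a failed delivery at inference time.
pred_err = np.abs(model_step(S[0], A[0]) - S_next[0])
```

The policy would then be optimized on the union of offline and rollout transitions (with a conservatism penalty, in CORVE's case); that update step is omitted here for brevity.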
Pages: 1054-1062
Page count: 9
Related Papers
50 items in total
  • [21] Regularizing a Model-based Policy Stationary Distribution to Stabilize Offline Reinforcement Learning
    Yang, Shentao
    Feng, Yihao
    Zhang, Shujian
    Zhou, Mingyuan
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [22] Model-Based Offline Reinforcement Learning with Pessimism-Modulated Dynamics Belief
    Guo, Kaiyang
    Shao, Yunfeng
    Geng, Yanhui
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [23] Model-Based Reinforcement Learning for Offline Zero-Sum Markov Games
    Yan, Yuling
    Li, Gen
    Chen, Yuxin
    Fan, Jianqing
    OPERATIONS RESEARCH, 2024, 72 (06) : 2430 - 2445
  • [24] Multiphase Autonomous Docking via Model-Based and Hierarchical Reinforcement Learning
    Aborizk, Anthony
    Fitz-Coy, Norman
    JOURNAL OF SPACECRAFT AND ROCKETS, 2024, 61 (04) : 993 - 1005
  • [25] Model-based offline reinforcement learning framework for optimizing tunnel boring machine operation
    Cao, Yupeng
    Luo, Wei
    Xue, Yadong
    Lin, Weiren
    Zhang, Feng
    UNDERGROUND SPACE, 2024, 19 : 47 - 71
  • [26] Differentiable Physics Models for Real-world Offline Model-based Reinforcement Learning
    Lutter, Michael
    Silberbauer, Johannes
    Watson, Joe
    Peters, Jan
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 4163 - 4170
  • [27] Offline Model-based Adaptable Policy Learning
    Chen, Xiong-Hui
    Yu, Yang
    Li, Qingyang
    Luo, Fan-Ming
    Qin, Zhiwei
    Shang, Wenjie
    Ye, Jieping
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [28] AUTONOMOUS PORT NAVIGATION WITH RANGING SENSORS USING MODEL-BASED REINFORCEMENT LEARNING
    Herremans, Siemen
    Anwar, Ali
    Troch, Arne
    Ravijts, Ian
    Vangeneugden, Maarten
    Mercelis, Siegfried
    Hellinckx, Peter
    PROCEEDINGS OF ASME 2023 42ND INTERNATIONAL CONFERENCE ON OCEAN, OFFSHORE & ARCTIC ENGINEERING, OMAE2023, VOL 5, 2023,
  • [29] Importance-Weighted Variational Inference Model Estimation for Offline Bayesian Model-Based Reinforcement Learning
    Hishinuma, Toru
    Senda, Kei
    IEEE ACCESS, 2023, 11 : 145579 - 145590
  • [30] Distributionally Robust Model-Based Offline Reinforcement Learning with Near-Optimal Sample Complexity
    Shi, Laixi
    Chi, Yuejie
    JOURNAL OF MACHINE LEARNING RESEARCH, 2024, 25