Enhancing wound healing through deep reinforcement learning for optimal therapeutics

Cited by: 0
Authors
Lu, Fan [1 ]
Zlobina, Ksenia [1 ]
Rondoni, Nicholas A. [1 ]
Teymoori, Sam [1 ]
Gomez, Marcella [1 ]
Affiliations
[1] Univ Calif Santa Cruz, Baskin Sch Engn, Appl Math, Santa Cruz, CA 95064 USA
Source
ROYAL SOCIETY OPEN SCIENCE | 2024, Vol. 11, Issue 7
Keywords
deep learning; reinforcement learning; optimal adaptive control; wound healing; optimal treatment regime; CLOSED-LOOP CONTROL; PROPOFOL ANESTHESIA; SYSTEMS; DRUG; APPROXIMATION;
DOI
10.1098/rsos.240228
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Classification Codes
07; 0710; 09;
Abstract
Finding the optimal treatment strategy to accelerate wound healing is of utmost importance, but it presents a formidable challenge owing to the intrinsically nonlinear nature of the process. We propose an adaptive closed-loop control framework that combines deep learning, optimal control and reinforcement learning to accelerate wound healing. The framework adaptively learns a linear representation of the nonlinear wound healing dynamics using deep learning, then interactively trains a deep reinforcement learning agent to track the optimal signal derived from that representation, without requiring an intricate mathematical model. This approach not only reduced the wound healing time by 45.56% compared with the untreated case, but also yielded a safer and more economical treatment strategy. The proposed methodology shows significant potential for expediting wound healing by effectively integrating perception, predictive modelling and optimal adaptive control.
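The closed-loop idea in the abstract can be illustrated with a minimal sketch: a controller tracks a reference closure trajectory for a toy nonlinear wound model and the resulting healing time is compared with the untreated case. The wound dynamics, the exponential reference, the proportional controller and the closure threshold below are all illustrative assumptions; the paper itself uses a deep-learned linear representation and a trained deep reinforcement learning agent, not this hand-written controller.

```python
import math

def wound_step(w, u, a=0.01, b=0.02):
    """Toy wound dynamics: w is normalized wound size; treatment
    intensity u in [0, 1] speeds up closure. Illustrative only."""
    return w * (1.0 - a - b * u)

def reference(t, rate=0.03):
    """Assumed target closure trajectory (stands in for the optimal
    signal derived from the learned linear representation)."""
    return math.exp(-rate * t)

def tracking_controller(t, w, gain=5.0, u_max=1.0):
    """Proportional tracking of the reference, clipped to a maximum
    dose (stands in for the DRL tracking agent)."""
    return max(0.0, min(u_max, gain * (w - reference(t))))

def healing_time(controller, w0=1.0, threshold=0.1, max_steps=5000):
    """Number of steps until the wound closes below `threshold`."""
    w = w0
    for t in range(max_steps):
        if w < threshold:
            return t
        w = wound_step(w, controller(t, w))
    return max_steps

t_untreated = healing_time(lambda t, w: 0.0)
t_treated = healing_time(tracking_controller)
print(t_untreated, t_treated)  # closed-loop treatment closes the wound sooner
```

Because the treatment term only ever shrinks the wound, the treated trajectory sits below the untreated one at every step, so tracking a faster reference necessarily shortens the healing time in this toy setting.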
Pages: 17
Related Articles
50 in total
  • [11] Enhancing Conversational Model With Deep Reinforcement Learning and Adversarial Learning
    Tran, Quoc-Dai Luong
    Le, Anh-Cuong
    Huynh, Van-Nam
    IEEE ACCESS, 2023, 11 : 75955 - 75970
  • [12] Deep differentiable reinforcement learning and optimal trading
    Jaisson, Thibault
    QUANTITATIVE FINANCE, 2022, 22 (08) : 1429 - 1443
  • [13] Enhancing Digital Twins through Reinforcement Learning
    Cronrath, Constantin
Aderiani, Abolfazl R.
    Lennartson, Bengt
    2019 IEEE 15TH INTERNATIONAL CONFERENCE ON AUTOMATION SCIENCE AND ENGINEERING (CASE), 2019, : 293 - 298
  • [14] Enhancing cut selection through reinforcement learning
    Wang, Shengchao
    Chen, Liang
    Niu, Lingfeng
    Dai, Yu-Hong
    SCIENCE CHINA-MATHEMATICS, 2024, 67 (06) : 1377 - 1394
  • [16] Deep Reinforcement Learning for Optimal Sailing Upwind
    Suda, Takumi
    Nikovski, Daniel
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022
  • [18] Quantifying innervation facilitated by deep learning in wound healing
    Mehta, Abijeet Singh
    Teymoori, Sam
    Recendez, Cynthia
    Fregoso, Daniel
    Gallegos, Anthony
    Yang, Hsin-Ya
    Aslankoohi, Elham
    Rolandi, Marco
    Isseroff, Roslyn Rivkah
    Zhao, Min
    Gomez, Marcella
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [19] Enhancing wound healing dressing development through interdisciplinary collaboration
    Hawthorne, Briauna
    Simmons, J. Kai
    Stuart, Braden
    Tung, Robert
    Zamierowski, David S.
    Mellott, Adam J.
    JOURNAL OF BIOMEDICAL MATERIALS RESEARCH PART B-APPLIED BIOMATERIALS, 2021, 109 (12) : 1967 - 1985
  • [20] Learning Mobile Manipulation through Deep Reinforcement Learning
    Wang, Cong
    Zhang, Qifeng
    Tian, Qiyan
    Li, Shuo
    Wang, Xiaohui
    Lane, David
    Petillot, Yvan
    Wang, Sen
    SENSORS, 2020, 20 (03)