Research on resource allocation methods for traditional Chinese medicine services based on deep reinforcement learning

Cited by: 0
Authors
Yuntao Ma [1]
Xiaolin Fang [1]
Jin Qi [2]
Yanfei Sun [2]
Affiliations
[1] Southeast University, School of Computer Science and Engineering
[2] Nanjing University of Posts and Telecommunications, School of Internet of Things
Keywords
Traditional Chinese medicine service; Deep reinforcement learning; Resource allocation; Resource-demand matching
DOI
10.1007/s00521-024-10579-3
Abstract
Traditional Chinese medicine (TCM) resources embody the heritage of traditional Chinese culture, and a growing number of people are choosing TCM services for their health care. To address the heterogeneity of resources, the waste of service resources, and the lagging demand response in existing TCM service resource allocation models, a deep reinforcement learning-based resource allocation method for TCM services is proposed. To overcome the fragmentation of TCM service resources, this paper presents a TCM service resource association method based on improved spectral clustering and establishes a resource-demand matching model. For the allocation problem that remains after resource association, we build a TCM service resource allocation model and solve it collaboratively with deep reinforcement learning. The results show that the proposed solution accelerates the demand response of TCM service resources, effectively reduces the cost of TCM services for patients, improves the quality of TCM services, and satisfies patients' demand for TCM services.
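
The details of the paper's improved spectral clustering are not given in this record, so the following is only a minimal sketch of the resource-association step under stated assumptions: TCM service resources are described by hypothetical feature vectors (here, three illustrative attributes such as service type, distance, and cost), and standard spectral clustering from scikit-learn stands in for the improved variant.

# Hypothetical sketch of resource association via spectral clustering.
# Feature values and group semantics are illustrative assumptions,
# not the paper's actual data or algorithm.
import numpy as np
from sklearn.cluster import SpectralClustering

# Illustrative feature vectors for six TCM service resources.
resources = np.array([
    [0.9, 0.1, 0.3],  # herbal-dispensing service A
    [0.8, 0.2, 0.4],  # herbal-dispensing service B
    [0.7, 0.1, 0.5],  # herbal-dispensing service C
    [0.1, 0.9, 0.7],  # acupuncture clinic A
    [0.2, 0.8, 0.6],  # acupuncture clinic B
    [0.1, 0.7, 0.8],  # acupuncture clinic C
])

# Spectral clustering builds a similarity graph (RBF affinity by default)
# and partitions it, yielding the resource-association groups.
model = SpectralClustering(n_clusters=2, random_state=0)
labels = model.fit_predict(resources)
print(labels)  # e.g., [0 0 0 1 1 1]: two associated resource groups

Grouping fragmented resources first means the allocation stage can reason over a small number of coherent resource groups rather than many scattered individual services.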
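The deep reinforcement learning formulation (states, actions, rewards) is likewise not detailed in this record. The sketch below assumes a simple setting: a small Q-network learns to assign each incoming demand to one of the resource groups, with current group loads plus demand features as the state and a load-dependent reward standing in for fast response and low cost. All names, dimensions, and numbers are illustrative assumptions.

# Hypothetical sketch: a tiny deep Q-network for demand-to-resource
# assignment. The environment, state, and reward are toy stand-ins,
# not the paper's model.
import random
import torch
import torch.nn as nn

N_RESOURCES = 4              # actions: which resource group serves the demand
STATE_DIM = N_RESOURCES + 2  # group loads + two assumed demand features

qnet = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(),
                     nn.Linear(32, N_RESOURCES))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

def step(loads, action):
    """Toy environment: reward is high when a lightly loaded group is
    chosen (fast response, low cost); the chosen group's load rises."""
    reward = 1.0 - loads[action].item()
    loads = loads.clone()
    loads[action] = min(1.0, loads[action].item() + 0.25)
    return reward, loads

gamma, eps = 0.9, 0.2
loads = torch.zeros(N_RESOURCES)
for t in range(500):
    demand = torch.rand(2)                  # random demand features
    state = torch.cat([loads, demand])
    if random.random() < eps:               # epsilon-greedy exploration
        action = random.randrange(N_RESOURCES)
    else:
        action = qnet(state).argmax().item()
    reward, next_loads = step(loads, action)
    next_state = torch.cat([next_loads, torch.rand(2)])
    # One-step temporal-difference target (no replay buffer, for brevity).
    with torch.no_grad():
        target = reward + gamma * qnet(next_state).max()
    loss = (qnet(state)[action] - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    loads = next_loads * 0.9                # loads decay as work completes
print("Q-values for an idle system:",
      qnet(torch.cat([torch.zeros(N_RESOURCES), torch.rand(2)])).tolist())

After training, the network prefers lightly loaded groups, which is the intuition behind using reinforcement learning to balance demand response speed against service cost.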
Pages: 1601-1616
Number of pages: 15