DHL: Deep reinforcement learning-based approach for emergency supply distribution in humanitarian logistics

Cited: 0
Authors
Junchao Fan
Xiaolin Chang
Jelena Mišić
Vojislav B. Mišić
Hongyue Kang
Affiliations
[1] Beijing Jiaotong University, Beijing Key Laboratory of Security and Privacy in Intelligent Transportation
[2] Ryerson University
Keywords
Deep reinforcement learning; Deep Q Network; Humanitarian logistics; Resource allocation; Emergency response
DOI: not available
Abstract
Alleviating human suffering in disasters is one of the main objectives of humanitarian logistics. The lack of emergency rescue materials is the root cause of this suffering and must be considered when making emergency supply distribution decisions. Large-scale disasters often inflict varying degrees of damage on the affected areas, leading to differences in both human suffering and the demand for emergency supplies across those areas. This paper considers a novel emergency supply distribution scenario in humanitarian logistics that takes these differences into account. In this scenario, besides economic goals such as minimizing cost, the humanitarian goal of alleviating the suffering of survivors is treated as one of the main bases for emergency supply distribution decision making. We first formulate the emergency supply distribution problem as a Markov Decision Process. Then, to obtain an allocation policy that reduces economic cost while decreasing the suffering of survivors, we develop DHL, a Deep Q-Network-based approach for emergency supply distribution in Humanitarian Logistics. Numerical results demonstrate that DHL outperforms baseline methods and solves the problem with lower time complexity.
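For readers unfamiliar with the pipeline the abstract describes, the sketch below shows the general shape of such an approach: a toy Markov Decision Process in which an agent allocates supplies among affected areas, with a reward that jointly penalizes survivor deprivation (the humanitarian objective) and shipment cost (the economic objective), trained with a standard Deep Q-Network loop (experience replay, epsilon-greedy exploration, periodic target-network sync). Every concrete detail here (the SupplyEnv dynamics, state encoding, reward weights, network sizes) is an illustrative assumption, not the paper's actual formulation.

import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

class SupplyEnv:
    # Toy stand-in MDP (hypothetical, not the paper's model): at each step the
    # agent ships one unit of supply to one of n_areas; unserved areas accumulate
    # deprivation, and the reward trades suffering against a fixed shipment cost.
    def __init__(self, n_areas=4, horizon=20, seed=0):
        self.n_areas, self.horizon = n_areas, horizon
        self.rng = random.Random(seed)

    def reset(self):
        self.t = 0
        self.deprivation = [self.rng.uniform(0.5, 1.5) for _ in range(self.n_areas)]
        return self._obs()

    def _obs(self):
        # State: per-area deprivation levels plus normalized time.
        return torch.tensor(self.deprivation + [self.t / self.horizon], dtype=torch.float32)

    def step(self, action):
        self.deprivation[action] = max(0.0, self.deprivation[action] - 0.5)  # serve one area
        for i in range(self.n_areas):
            if i != action:
                self.deprivation[i] += 0.1  # unmet demand deepens suffering elsewhere
        reward = -(sum(self.deprivation) + 0.05)  # humanitarian term + fixed economic cost
        self.t += 1
        return self._obs(), reward, self.t >= self.horizon

class QNetwork(nn.Module):
    # Maps the state vector to one Q-value per candidate allocation action.
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def train(episodes=200, gamma=0.99, eps=1.0, eps_min=0.05, eps_decay=0.995):
    env = SupplyEnv()
    state_dim, n_actions = env.n_areas + 1, env.n_areas
    q, target = QNetwork(state_dim, n_actions), QNetwork(state_dim, n_actions)
    target.load_state_dict(q.state_dict())
    opt = optim.Adam(q.parameters(), lr=1e-3)
    buffer = deque(maxlen=10_000)  # experience replay buffer

    for ep in range(episodes):
        s, done = env.reset(), False
        while not done:
            if random.random() < eps:  # epsilon-greedy exploration
                a = random.randrange(n_actions)
            else:
                with torch.no_grad():
                    a = int(q(s).argmax())
            s2, r, done = env.step(a)
            buffer.append((s, a, r, s2, done))
            s = s2
            if len(buffer) >= 64:
                batch = random.sample(buffer, 64)
                bs = torch.stack([b[0] for b in batch])
                ba = torch.tensor([b[1] for b in batch])
                br = torch.tensor([b[2] for b in batch], dtype=torch.float32)
                bs2 = torch.stack([b[3] for b in batch])
                bd = torch.tensor([float(b[4]) for b in batch])
                with torch.no_grad():  # bootstrapped target from the frozen network
                    y = br + gamma * (1.0 - bd) * target(bs2).max(dim=1).values
                q_sa = q(bs).gather(1, ba.unsqueeze(1)).squeeze(1)
                loss = nn.functional.mse_loss(q_sa, y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        eps = max(eps_min, eps * eps_decay)
        if ep % 20 == 0:
            target.load_state_dict(q.state_dict())  # periodic target-network sync

if __name__ == "__main__":
    train()

A full treatment would add the constraints the abstract implies (vehicle capacities, multi-period demand, per-area damage levels), but the loop above is the core DQN mechanism such an approach builds on.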
Pages: 2376-2389
Page count: 13
Related Papers (10 of 50 shown)
  • [1] DHL: Deep reinforcement learning-based approach for emergency supply distribution in humanitarian logistics
    Fan, Junchao
    Chang, Xiaolin
    Misic, Jelena
    Misic, Vojislav B.
    Kang, Hongyue
    PEER-TO-PEER NETWORKING AND APPLICATIONS, 2022, 15 (05) : 2376 - 2389
  • [2] Deep Reinforcement Learning-Based Rescue Resource Distribution Scheduling of Storm Surge Inundation Emergency Logistics
    Wang, Yuewei
    Chen, Xiaodao
    Wang, Lizhe
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2023, 19 (10) : 10004 - 10013
  • [3] Reinforcement learning approach for resource allocation in humanitarian logistics
    Yu, Lina
    Zhang, Canrong
    Jiang, Jingyan
    Yang, Huasheng
    Shang, Huayan
    EXPERT SYSTEMS WITH APPLICATIONS, 2021, 173
  • [4] OPTIMIZING HUMANITARIAN LOGISTICS WITH DEEP REINFORCEMENT LEARNING AND DIGITAL TWINS
    Soykan, Bulent
    Rabadia, Ghaith
    2024 ANNUAL MODELING AND SIMULATION CONFERENCE, ANNSIM 2024, 2024,
  • [5] Distributional Deep Reinforcement Learning-Based Emergency Frequency Control
    Xie, Jian
    Sun, Wei
    IEEE TRANSACTIONS ON POWER SYSTEMS, 2022, 37 (04) : 2720 - 2730
  • [6] A Deep Reinforcement Learning-Based Approach in Porker Game
    Kong, Yan
    Rui, Yefeng
    Hsia, Chih-Hsien
JOURNAL OF COMPUTERS (TAIWAN), 2023, 34 (02) : 41 - 51
  • [7] Computing on Wheels: A Deep Reinforcement Learning-Based Approach
    Kazmi, S. M. Ahsan
    Tai Manh Ho
    Tuong Tri Nguyen
    Fahim, Muhammad
    Khan, Adil
    Piran, Md Jalil
    Baye, Gaspard
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (11) : 22535 - 22548
  • [8] A deep reinforcement learning-based approach for the residential appliances scheduling
    Li, Sichen
    Cao, Di
    Huang, Qi
    Zhang, Zhenyuan
    Chen, Zhe
    Blaabjerg, Frede
    Hu, Weihao
    ENERGY REPORTS, 2022, 8 : 1034 - 1042
  • [9] A Deep Reinforcement Learning-Based Approach for Android GUI Testing
    Gao, Yuemeng
    Tao, Chuanqi
    Guo, Hongjing
    Gao, Jerry
    WEB AND BIG DATA, PT III, APWEB-WAIM 2022, 2023, 13423 : 262 - 276
  • [10] Real-time operation of distribution network: A deep reinforcement learning-based reconfiguration approach
    Bui, Van-Hai
    Su, Wencong
    SUSTAINABLE ENERGY TECHNOLOGIES AND ASSESSMENTS, 2022, 50