Ground Delay Program Planning Using Markov Decision Processes

Cited by: 4
Authors: Cox, Jonathan [1]; Kochenderfer, Mykel J. [1]
Affiliation: [1] Stanford Univ, Dept Aeronaut & Astronaut, 496 Lomita Mall, Stanford, CA 94305 USA
Source:
Keywords: TRAFFIC FLOW MANAGEMENT; HOLDING PROBLEM
DOI: 10.2514/1.I010387
Chinese Library Classification: V [Aeronautics, Astronautics]
Subject Classification: 08; 0825
Abstract
This paper compares three approaches for selecting planned airport acceptance rates in the single-airport ground-holding problem: the Ball et al. model, the Richetta-Odoni dynamic model, and an approach based on approximate dynamic programming. Selecting planned airport acceptance rates is motivated by the current practice of ground delay program planning under collaborative decision making. The approaches were evaluated using real flight schedules and landing capacity data from Newark Liberty International and San Francisco International Airports. It is shown that planned airport acceptance rates can be determined from the decision variables of the Richetta-Odoni dynamic model. The approximate dynamic programming solution, introduced by the authors, is found by posing a model that treats planned airport acceptance rate selection as a Markov decision process. The Richetta-Odoni dynamic and approximate dynamic programming approaches were found to produce similar solutions, and both dominated the Ball et al. model. The Richetta-Odoni dynamic model is computationally more efficient, finding solutions 10 times faster than approximate dynamic programming, although the approximate dynamic programming approach can more easily incorporate complex objectives. Surprisingly, the performance of all three approaches did not change significantly when evaluated using different models of collaborative decision-making procedures. This observation suggests that modeling collaborative decision making may not be important for selecting near-optimal planned airport acceptance rates.
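To make the modeling idea concrete, below is a minimal Python sketch of posing planned airport acceptance rate (AAR) selection as a Markov decision process: the state tracks the airborne queue and the ground-held backlog, the action is the planned AAR for the next period, and the realized landing capacity is stochastic. The planning horizon, demand profile, cost weights, capacity scenarios, and the use of exact backward induction on a small discretized state space are all illustrative assumptions; this is not the formulation or the approximate dynamic programming method used in the paper.

```python
# Illustrative sketch: planned AAR selection as a finite-horizon MDP.
# All numbers below are assumptions chosen for the example, not data from the paper.
import itertools

T = 8                                      # planning periods in the ground delay program
DEMAND = [12, 10, 14, 9, 11, 13, 8, 7]     # scheduled arrivals per period (assumed)
AAR_CHOICES = range(16)                    # candidate planned AARs per period
MAX_QUEUE = 40                             # cap used to discretize the queue states
C_GROUND, C_AIR = 1.0, 3.0                 # per-flight delay costs; airborne > ground
CAPACITY_SCENARIOS = [(6, 0.4), (14, 0.6)] # (realized landing capacity, probability)

def step(air_queue, ground_backlog, demand, planned_aar, capacity):
    """One-period transition: release flights up to the planned AAR, land up to
    the realized capacity, and return the next state components and the cost."""
    released = min(planned_aar, ground_backlog + demand)
    ground_held = ground_backlog + demand - released      # flights delayed on the ground
    airborne = air_queue + released
    landed = min(capacity, airborne)
    next_air_queue = min(airborne - landed, MAX_QUEUE)    # flights holding in the air
    cost = C_GROUND * ground_held + C_AIR * next_air_queue
    return next_air_queue, min(ground_held, MAX_QUEUE), cost

# Backward induction over states (airborne queue, ground backlog).
value = {s: 0.0 for s in itertools.product(range(MAX_QUEUE + 1), repeat=2)}
policy = [{} for _ in range(T)]
for t in reversed(range(T)):
    new_value = {}
    for air_q, ground_b in value:
        best_cost, best_aar = float("inf"), None
        for aar in AAR_CHOICES:
            expected = 0.0
            for capacity, prob in CAPACITY_SCENARIOS:
                nq, nb, cost = step(air_q, ground_b, DEMAND[t], aar, capacity)
                expected += prob * (cost + value[(nq, nb)])
            if expected < best_cost:
                best_cost, best_aar = expected, aar
        new_value[(air_q, ground_b)] = best_cost
        policy[t][(air_q, ground_b)] = best_aar
    value = new_value

print("Planned AAR for period 0 with no backlog:", policy[0][(0, 0)])
```

Backward induction is tractable here only because the toy state space is tiny; the approximate dynamic programming approach described in the abstract becomes attractive once the state space and objective grow more complex than exhaustive enumeration can handle.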
Pages: 134-142
Number of pages: 9