Driving force planning in shield tunneling based on Markov decision processes

Cited by: 0
Authors
XiangTao Hu
YongAn Huang
ZhouPing Yin
YouLun Xiong
Institutions
[1] Huazhong University of Science & Technology, State Key Laboratory of Digital Manufacturing Equipment & Technology
Keywords
shield tunneling; Markov decision process; automatic deviation rectifying; interval arithmetic; driving force planning;
DOI: not available
Abstract
In shield tunneling, the control system requires a highly reliable deviation-rectifying capability to ensure that the tunnel trajectory stays within the permissible criterion. To this end, we present an approach that adopts Markov decision process (MDP) theory to plan the driving force with an explicit representation of the uncertainty during excavation. The possible shield attitudes and the driving forces during excavation are discretized into a state set and an action set, respectively. In particular, an evaluation function is proposed that accounts for both the stability of the driving force and the deviation of the shield attitude. Unlike a deterministic approach, the driving forces based on the MDP model lead to an uncertain effect, and the resulting attitude is known only with an imprecise probability. We consider the case in which the transition probability varies within a given domain estimated from field data, and discuss the optimal policy based on interval arithmetic. The validity of the approach is assessed by comparing the planned driving forces with actual operating data from the field records of Line 9 in Tianjin. The results show that the MDP model is accurate enough to predict the driving force for automatic deviation rectifying.
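The planning scheme described in the abstract — value iteration over a discretized state/action set, with transition probabilities known only to lie in intervals — can be sketched as robust (pessimistic) value iteration. The sketch below is illustrative only: the function names, the greedy worst-case routine, and the toy state/action setup are assumptions of this example, not the paper's actual implementation or its shield-attitude model.

```python
import numpy as np

def worst_case_expectation(v, p_lo, p_hi):
    """Minimal expected value of v over all transition distributions p
    with p_lo <= p <= p_hi and sum(p) == 1 (intervals assumed feasible).
    Greedy: start from the lower bounds, then push the remaining
    probability mass onto the lowest-value successors first."""
    p = p_lo.copy()
    slack = 1.0 - p.sum()
    for s in np.argsort(v):                 # lowest-value states first
        add = min(p_hi[s] - p[s], slack)
        p[s] += add
        slack -= add
        if slack <= 1e-12:
            break
    return float(p @ v)

def robust_value_iteration(rewards, P_lo, P_hi, gamma=0.9, tol=1e-8):
    """Pessimistic value iteration for an interval MDP.
    rewards[s, a]      -- immediate reward (e.g. the evaluation function)
    P_lo/P_hi[s, a, t] -- interval bounds on transition probabilities."""
    n_s, n_a = rewards.shape
    v = np.zeros(n_s)
    while True:
        # Bellman backup against the worst transition model in the interval
        q = np.array([[rewards[s, a] + gamma *
                       worst_case_expectation(v, P_lo[s, a], P_hi[s, a])
                       for a in range(n_a)] for s in range(n_s)])
        v_new = q.max(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, q.argmax(axis=1)  # robust values and policy
        v = v_new
```

When the interval bounds coincide (`P_lo == P_hi`), this reduces to standard value iteration; widening the intervals makes the planner increasingly conservative, which is the intended effect of modeling the excavation uncertainty with imprecise probabilities.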
Pages: 1022 - 1030
Number of pages: 8
Related papers
50 records total
  • [11] Planning using hierarchical constrained Markov decision processes
    Feyzabadi, Seyedshams
    Carpin, Stefano
    AUTONOMOUS ROBOTS, 2017, 41 (08) : 1589 - 1607
  • [13] Probabilistic Preference Planning Problem for Markov Decision Processes
    Li, Meilun
    Turrini, Andrea
    Hahn, Ernst Moritz
    She, Zhikun
    Zhang, Lijun
    IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 2022, 48 (05) : 1545 - 1559
  • [14] Learning and Planning with Timing Information in Markov Decision Processes
    Bacon, Pierre-Luc
    Balle, Borja
    Precup, Doina
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2015, : 111 - 120
  • [15] Approximate planning and verification for large Markov decision processes
    Lassaigne, Richard
    Peyronnet, Sylvain
    INTERNATIONAL JOURNAL ON SOFTWARE TOOLS FOR TECHNOLOGY TRANSFER, 2015, 17 : 457 - 467
  • [16] Planning in Discrete and Continuous Markov Decision Processes by Probabilistic Programming
    Nitti, Davide
    Belle, Vaishak
    de Raedt, Luc
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2015, PT II, 2015, 9285 : 327 - 342
  • [17] Online Planning for Large Markov Decision Processes with Hierarchical Decomposition
    Bai, Aijun
    Wu, Feng
    Chen, Xiaoping
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2015, 6 (04)
  • [18] Robust Adaptive Markov Decision Processes PLANNING WITH MODEL UNCERTAINTY
    Bertuccelli, Luca F.
    Wu, Albert
    How, Jonathan P.
    IEEE CONTROL SYSTEMS MAGAZINE, 2012, 32 (05): : 96 - 109
  • [19] Optimistic Planning for Belief-Augmented Markov Decision Processes
    Fonteneau, Raphael
    Busoniu, Lucian
    Munos, Remi
    PROCEEDINGS OF THE 2013 IEEE SYMPOSIUM ON ADAPTIVE DYNAMIC PROGRAMMING AND REINFORCEMENT LEARNING (ADPRL), 2013, : 77 - 84
  • [20] Optimistic planning in Markov decision processes using a generative model
    Szorenyi, Balazs
    Kedenburg, Gunnar
    Munos, Remi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 27 (NIPS 2014), 2014, 27