Driving force planning in shield tunneling based on Markov decision processes

Cited: 0
Author
HU XiangTao
Keywords
shield tunneling; Markov decision process; automatic deviation rectifying; interval arithmetic; driving force planning;
DOI
none available
CLC number
U455.43 [shield tunneling method (full-face excavation)]
Abstract
In shield tunneling, the control system needs a highly reliable deviation-rectifying capability to ensure that the tunnel trajectory meets the permissible criterion. To this end, we present an approach that adopts Markov decision process (MDP) theory to plan the driving force, with an explicit representation of the uncertainty during excavation. The possible shield attitudes and the driving forces during excavation are discretized into a state set and an action set, respectively. In particular, an evaluation function is proposed that accounts for both the stability of the driving force and the deviation of the shield attitude. Unlike a deterministic approach, the driving forces derived from the MDP model have uncertain effects, and the resulting attitude is known only with imprecise probability. We consider the case in which the transition probability varies within a domain estimated from field data, and discuss the optimal policy based on interval arithmetic. The validity of the approach is assessed by comparing the planned driving forces with actual operating data from the field records of Line 9 in Tianjin. The comparison shows that the MDP model predicts the driving force for automatic deviation rectifying with reasonable accuracy.
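The planning scheme the abstract describes — an MDP whose transition probabilities are only known to lie in intervals estimated from field data, solved pessimistically — can be illustrated with a small robust value-iteration sketch. This is a generic, hedged illustration of interval-probability MDP solving, not the paper's actual model: the states, actions, rewards, and probability bounds below are invented placeholders, and the inner worst-case step uses a simple greedy mass-shifting rule that assumes the interval bounds are consistent (lower bounds sum to at most 1, upper bounds to at least 1).

```python
def worst_case_dist(lower, upper, values):
    """Choose a transition distribution within the interval bounds
    [lower[s], upper[s]] (summing to 1) that minimizes the expected
    value -- the pessimistic transition used in robust planning.
    Assumes sum(lower) <= 1 <= sum(upper)."""
    order = sorted(range(len(values)), key=lambda s: values[s])  # worst next-states first
    probs = list(lower)                # start from the lower bounds
    remaining = 1.0 - sum(lower)       # probability mass left to distribute
    for s in order:                    # pile leftover mass onto low-value states
        add = min(upper[s] - lower[s], remaining)
        probs[s] += add
        remaining -= add
    return probs

def robust_value_iteration(P_low, P_up, R, gamma=0.9, tol=1e-8):
    """Value iteration against the worst-case transition in each interval.
    P_low[s][a][t], P_up[s][a][t]: probability bounds for s --a--> t.
    R[s][a]: immediate reward (e.g. penalizing attitude deviation and
    driving-force instability, as in the paper's evaluation function)."""
    n_states, n_actions = len(R), len(R[0])
    V = [0.0] * n_states
    while True:
        V_new = [
            max(
                R[s][a] + gamma * sum(
                    p * V[t]
                    for t, p in enumerate(worst_case_dist(P_low[s][a], P_up[s][a], V)))
                for a in range(n_actions))
            for s in range(n_states)]
        if max(abs(a - b) for a, b in zip(V_new, V)) < tol:
            return V_new
        V = V_new

# Illustrative 2-state toy problem (state 0: on trajectory, state 1: deviated;
# action 0: keep force, action 1: adjust force).  All numbers are made up.
P_low = [[[0.7, 0.1], [0.5, 0.3]], [[0.2, 0.6], [0.4, 0.4]]]
P_up  = [[[0.9, 0.3], [0.7, 0.5]], [[0.4, 0.8], [0.6, 0.6]]]
R     = [[1.0, 0.8], [-0.5, 0.0]]

V = robust_value_iteration(P_low, P_up, R)
print("pessimistic state values:", V)
```

The greedy inner minimization works because, with simple box constraints on each probability, the worst-case expectation is obtained by filling low-value successor states up to their upper bounds first; with more general uncertainty sets it would instead be solved as a small linear program per state-action pair.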
Pages: 1022-1030 (9 pages)
Related papers (50 items)
  • [41] A planning system based on Markov decision processes to guide people with dementia through activities of daily living
    Boger, J
    Hoey, J
    Poupart, P
    Boutilier, C
    Fernie, G
    Mihailidis, A
    IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE, 2006, 10 (02): : 323 - 333
  • [42] Wind-Energy based Path Planning For Unmanned Aerial Vehicles Using Markov Decision Processes
    Al-Sabban, Wesam H.
    Gonzalez, Luis F.
    Smith, Ryan N.
    2013 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2013, : 784 - 789
  • [43] Distribution-based objectives for Markov Decision Processes
    Akshay, S.
    Genest, Blaise
    Vyas, Nikhil
    LICS'18: PROCEEDINGS OF THE 33RD ANNUAL ACM/IEEE SYMPOSIUM ON LOGIC IN COMPUTER SCIENCE, 2018, : 36 - 45
  • [44] A distributed search system based on Markov Decision Processes
    Shen, YP
    Lee, DL
    Zhang, LW
    INTERNET APPLICATIONS, 1999, 1749 : 73 - 82
  • [45] Game-based abstraction for Markov decision processes
    Kwiatkowska, Marta
    Norman, Gethin
    Parker, David
    QEST 2006: THIRD INTERNATIONAL CONFERENCE ON THE QUANTITATIVE EVALUATION OF SYSTEMS, 2006, : 157+
  • [46] Prioritized goal decomposition of Markov decision processes: Toward a synthesis of classical and decision theoretic planning
    Boutilier, C
    Brafman, RI
    Geib, C
    IJCAI-97 - PROCEEDINGS OF THE FIFTEENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOLS 1 AND 2, 1997, : 1156 - 1162
  • [47] A reinforcement learning based algorithm for Markov decision processes
    Bhatnagar, S
    Kumar, S
    2005 International Conference on Intelligent Sensing and Information Processing, Proceedings, 2005, : 199 - 204
  • [48] FINITE STATE CONTINUOUS TIME MARKOV DECISION PROCESSES WITH A FINITE PLANNING HORIZON
    MILLER, BL
    SIAM JOURNAL ON CONTROL, 1968, 6 (02): : 266+
  • [49] Robust path planning for flexible needle insertion using Markov decision processes
    Xiaoyu Tan
    Pengqian Yu
    Kah-Bin Lim
    Chee-Kong Chui
    International Journal of Computer Assisted Radiology and Surgery, 2018, 13 : 1439 - 1451
  • [50] Planning treatment of ischemic heart disease with partially observable Markov decision processes
    Hauskrecht, M
    Fraser, H
    ARTIFICIAL INTELLIGENCE IN MEDICINE, 2000, 18 (03) : 221 - 244