Human Intention Recognition using Markov Decision Processes

Cited: 0
Authors
Lin, Hsien-I [1 ]
Chen, Wei-Kai [1 ]
Affiliations
[1] Natl Taipei Univ Technol, Grad Inst Automat Technol, Taipei, Taiwan
Keywords
Human intention recognition; human-robot interaction (HRI); Markov decision processes (MDPs); frequency-based reward function;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
Human intention recognition in human-robot interaction (HRI) has been a popular topic. This paper presents a human-intention recognition framework using Markov decision processes (MDPs). The framework is composed of an object layer and a motion layer, which obtain the object information and the human hand gestures, respectively. The information extracted from both layers is used to represent the state in the MDPs. To learn human intention in accomplishing tasks, a frequency-based reward function for the MDPs is proposed. It drives the MDPs to converge to a policy that reflects how frequently each task has been performed. In our experiments, four tasks, trained with different numbers of trials of pouring water and making coffee, were used to validate the proposed framework. With the frequency-based reward function, the plausible intentional actions in certain states were distinguishable from those obtained with the default reward function.
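The core idea of the frequency-based reward function can be illustrated with a minimal sketch: the reward for a state-action pair is set in proportion to how often that action was demonstrated in that state, and the resulting MDP is solved (here by value iteration) so that the greedy policy favors the most frequently performed task. This is not the authors' implementation; the state names, gesture actions, demonstration counts, transition model, and discount factor below are all illustrative assumptions.

```python
# Minimal sketch of a frequency-based reward in an MDP (illustrative assumptions only).
import numpy as np

states = ["cup_empty", "cup_with_coffee", "cup_with_water"]   # object-layer information (assumed)
actions = ["grasp_kettle", "grasp_coffee_jar", "pour"]        # motion-layer gestures (assumed)

# Demonstration counts: how often each action was observed in each state (assumed numbers).
counts = np.array([[6.0, 2.0, 0.0],
                   [1.0, 0.0, 5.0],
                   [0.0, 1.0, 4.0]])

# Frequency-based reward: normalize counts per state so frequent actions earn higher reward.
reward = counts / counts.sum(axis=1, keepdims=True)

# Transition model P[s, a, s'] (illustrative assumption; rows sum to 1 for each state-action pair).
P = np.zeros((3, 3, 3))
P[0, 0, 2] = P[0, 1, 1] = P[0, 2, 0] = 1.0
P[1, 0, 1] = P[1, 1, 1] = P[1, 2, 2] = 1.0
P[2, 0, 2] = P[2, 1, 2] = P[2, 2, 2] = 1.0

gamma = 0.9
V = np.zeros(len(states))
for _ in range(200):                      # value iteration
    Q = reward + gamma * (P @ V)          # Q[s, a] = R(s, a) + gamma * sum_s' P(s'|s,a) * V(s')
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)                 # greedy policy: most plausible intentional action per state
for s, a in zip(states, policy):
    print(f"In state '{s}', the inferred intention is '{actions[a]}'")
```

Normalizing the counts per state keeps the reward bounded in [0, 1], so states with many demonstrations do not dominate the value function merely by volume; only the relative frequency of actions within each state shapes the learned policy.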
Pages: 340-343
Page count: 4