A Multimodal Path Planning Approach to Human Robot Interaction Based on Integrating Action Modeling

Times Cited: 3
Authors
Kawasaki, Yosuke [1 ]
Yorozu, Ayanori [1 ]
Takahashi, Masaki [2 ]
Pagello, Enrico [3 ,4 ]
Affiliations
[1] Keio Univ, Grad Sch Sci & Technol, Kohoku Ku, 3-14-1 Hiyoshi, Yokohama, Kanagawa 2238522, Japan
[2] Keio Univ, Dept Syst Design Engn, Kohoku Ku, 3-14-1 Hiyoshi, Yokohama, Kanagawa 2238522, Japan
[3] Univ Padua, Dept Informat Engn, Intelligent Autonomous Syst Lab, Padua, Italy
[4] IT Robot Srl, Vicenza, Italy
Funding
Japan Science and Technology Agency (JST)
Keywords
Robot navigation; Human-robot interaction; Action modeling; Multimodal path planning
DOI
10.1007/s10846-020-01244-7
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
To complete a task consisting of a series of actions involving human-robot interaction, the robot must plan each motion with respect both to the current action and to the action that follows. We focus on the specific action of "approaching a group of people," whose purpose is to obtain accurate human data and thereby make tasks that involve interacting with multiple people run more smoothly. The required movement depends on the characteristics of the sensors relevant to the task and on the placement of people at and around the destination; given the variety of tasks and people placements, destinations and paths cannot be pre-computed. This paper therefore presents a navigation system that obtains accurate human data from sensor characteristics, task content, and real-time sensor data during human-robot interaction (HRI), rather than navigating toward a predetermined static point. We achieve this with multimodal path planning that integrates action modeling, accounting for both voice and image sensing of the interacting people as well as obstacle avoidance. The method is experimentally verified with a robot in a coffee shop environment.
Pages: 955-972
Page count: 18
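
The abstract describes choosing where to approach a group from sensor characteristics and real-time observations rather than from a fixed goal point. The Python sketch below is only an illustration of that general idea under our own assumptions, not the authors' implementation: it scores candidate approach poses around a detected group by combining a camera field-of-view term, a microphone range term, and an obstacle-clearance term, then returns the best-scoring pose. All function names, sensor models, and weights here are hypothetical.

```python
import math

# Minimal sketch of multimodal approach-pose selection with hand-made
# sensor models; this is NOT the paper's implementation.

def image_quality(pose, group, fov_deg=60.0, best_range=1.5):
    """Camera term: higher when the group is centered in the field of
    view and near an assumed ideal observation range (metres)."""
    x, y, heading = pose
    dist = math.hypot(group[0] - x, group[1] - y)
    bearing = math.degrees(math.atan2(group[1] - y, group[0] - x) - heading)
    bearing = (bearing + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    if abs(bearing) > fov_deg / 2.0:
        return 0.0
    range_score = math.exp(-(dist - best_range) ** 2)
    return range_score * (1.0 - abs(bearing) / (fov_deg / 2.0))

def voice_quality(pose, group, max_range=2.5):
    """Microphone term: a crude SNR proxy that decays linearly with
    distance and ignores heading."""
    dist = math.hypot(group[0] - pose[0], group[1] - pose[1])
    return max(0.0, 1.0 - dist / max_range)

def clearance(pose, obstacles, inflation=0.6):
    """Obstacle term: 0 if the pose lies inside an inflated obstacle
    (treated as a hard constraint below), else 1."""
    if any(math.hypot(ox - pose[0], oy - pose[1]) < inflation
           for ox, oy in obstacles):
        return 0.0
    return 1.0

def select_approach_pose(group, obstacles,
                         w_img=0.5, w_voice=0.3, w_clear=0.2):
    """Sample poses on rings around the group, facing it, and return the
    pose (x, y, heading) that maximises the weighted multimodal score."""
    best_pose, best_score = None, -1.0
    for radius in (1.0, 1.5, 2.0):
        for k in range(16):
            ang = 2.0 * math.pi * k / 16.0
            x = group[0] + radius * math.cos(ang)
            y = group[1] + radius * math.sin(ang)
            heading = math.atan2(group[1] - y, group[0] - x)  # face group
            pose = (x, y, heading)
            c = clearance(pose, obstacles)
            if c == 0.0:
                continue  # never stop inside an obstacle
            score = (w_img * image_quality(pose, group)
                     + w_voice * voice_quality(pose, group)
                     + w_clear * c)
            if score > best_score:
                best_pose, best_score = pose, score
    return best_pose

if __name__ == "__main__":
    group_centroid = (3.0, 2.0)        # detected group centre (m)
    tables = [(2.0, 2.0), (3.5, 3.2)]  # e.g. coffee-shop tables
    print(select_approach_pose(group_centroid, tables))
```

In the paper's setting, such score terms would come from the action model and real sensor characteristics, and the selected pose would feed a path planner with obstacle avoidance; here everything is reduced to closed-form toy terms for readability.
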
Related Papers
50 entries in total; [31]-[40] shown
  • [31] Path planning based on improved A* and dynamic window approach for mobile robot
    Chen J.
    Xu L.
    Chen J.
    Liu Q.
Jisuanji Jicheng Zhizao Xitong/Computer Integrated Manufacturing Systems (CIMS), 2022, 28 (06): 1650 - 1658
  • [32] A new approach for mobile robot path planning based on RRT algorithm
    Nguyen, Thanh-Hung
    Nguyen, Xuan-Thuan
    Pham, Duc-An
    Tran, Ba-Long
    Bui, Dinh-Ba
MODERN PHYSICS LETTERS B, 2023, 37 (18)
  • [33] New Approach for Mobile Robot Path Planning
    Lin, Fengyun
    MATERIALS SCIENCE AND MECHANICAL ENGINEERING, 2014, 467 : 475 - 478
  • [34] Mobile robot path planning: a multicriteria approach
    Fernandez, JA
    Gonzalez, J
    Mandow, L
    Pérez-de-la-Cruz, JL
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 1999, 12 (04) : 543 - 554
  • [35] Research on multimodal human-robot interaction based on speech and gesture
    Deng Yongda
    Li Fang
    Xin Huang
    COMPUTERS & ELECTRICAL ENGINEERING, 2018, 72 : 443 - 454
  • [36] A Gesture-based Multimodal Interface for Human-Robot Interaction
    Uimonen, Mikael
    Kemppi, Paul
    Hakanen, Taru
    2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, 2023, : 165 - 170
  • [37] Mobile robot path planning based on social interaction space in social environment
    Chen, Weihua
    Zhang, Tie
    Zou, Yanbiao
INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2018, 15 (03)
  • [38] Path Planning of Continuum Robot Based on Path Fitting
    Niu, Guochen
    Zhang, Yunxiao
    Li, Wenshuai
    JOURNAL OF CONTROL SCIENCE AND ENGINEERING, 2020, 2020
  • [39] Multimodal Emotion Recognition for Human Robot Interaction
    Adiga, Sharvari
    Vaishnavi, D. V.
    Saxena, Suchitra
Tripathi, Shikha
    2020 7TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING & MACHINE INTELLIGENCE (ISCMI 2020), 2020, : 197 - 203
  • [40] Multimodal Representation Learning for Human Robot Interaction
    Sheppard, Eli
    Lohan, Katrin S.
    HRI'20: COMPANION OF THE 2020 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, 2020, : 445 - 446