Human-robot interaction-oriented video understanding of human actions

Cited by: 1
Authors
Wang, Bin [1 ]
Chang, Faliang [1 ]
Liu, Chunsheng [1 ]
Wang, Wenqian [1 ]
Affiliations
[1] Shandong Univ, Sch Control Sci & Engn, 17923 Jingshi Rd, Jinan 250061, Peoples R China
Funding
National Key Research and Development Program of China; National Natural Science Foundation of China;
Keywords
Action recognition; Human-robot interaction; Temporal modeling; Contextual scene reasoning; NETWORK; FEATURES;
DOI
10.1016/j.engappai.2024.108247
Chinese Library Classification (CLC)
TP [Automation technology; computer technology];
Discipline classification code
0812;
Abstract
This paper focuses on action recognition tasks oriented to human-robot interaction, one of the major challenges in robotic video understanding. Previous approaches concentrate on designing temporal models but lack the ability to capture motion information and to build contextual correlation models, which may prevent robots from effectively understanding long-term video actions. To address these two problems, this paper proposes a novel video understanding framework comprising an Adaptive Temporal Sensitivity and Motion Capture Network (ATSMC-Net) and a contextual scene reasoning module called the Knowledge Function Graph Module (KFG-Module). The proposed ATSMC-Net adaptively adjusts the frame-level and pixel-level sensitive regions of temporal features to effectively capture motion information. To fuse contextual scene information for cross-temporal inference, the KFG-Module is introduced to achieve fine-grained video understanding based on the relationship between objects and actions. We evaluate the method on three public video understanding benchmarks, including Something-Something-V1&V2 and HMDB51. In addition, we present a dataset of real-world human-robot interaction scenarios to verify the effectiveness of our approach on mobile robots. The experimental results show that the proposed method significantly improves the robots' video understanding.
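As a rough illustration only (the paper's code is not part of this record), the sketch below shows one way frame-level and pixel-level temporal-sensitivity gating of video features could be written in PyTorch; the module, class, and parameter names here are hypothetical assumptions and are not taken from ATSMC-Net.

```python
# Hypothetical sketch of frame-level and pixel-level gating of video features.
# Shapes and module names are illustrative assumptions, not the ATSMC-Net code.
import torch
import torch.nn as nn


class TemporalSensitivityGate(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Frame-level gate: one scalar weight per frame from globally pooled features.
        self.frame_gate = nn.Sequential(
            nn.Linear(channels, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, 1),
            nn.Sigmoid(),
        )
        # Pixel-level gate: a spatial weight map per frame from frame-to-frame differences.
        self.pixel_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width)
        b, t, c, h, w = x.shape
        pooled = x.mean(dim=(3, 4))                          # (b, t, c)
        frame_w = self.frame_gate(pooled).view(b, t, 1, 1, 1)
        # Temporal differences as a crude motion cue (the first frame keeps itself).
        diff = torch.cat([x[:, :1], x[:, 1:] - x[:, :-1]], dim=1)
        pixel_w = self.pixel_gate(diff.reshape(b * t, c, h, w)).view(b, t, 1, h, w)
        return x * frame_w * pixel_w


if __name__ == "__main__":
    feats = torch.randn(2, 8, 64, 14, 14)                    # toy clip features
    gated = TemporalSensitivityGate(64)(feats)
    print(gated.shape)                                       # torch.Size([2, 8, 64, 14, 14])
```

This is only a minimal stand-in for the idea described in the abstract: frame weights emphasize informative time steps, while the difference-driven spatial map emphasizes moving regions within each frame.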
Pages: 12