Learning Multimodal Representations for Sample-efficient Recognition of Human Actions

Cited by: 0
Authors
Vasco, Miguel [1 ,2 ]
Melo, Francisco S. [1 ,2 ]
de Matos, David Martins [1 ,2 ]
Paiva, Ana [1 ,2 ]
Inamura, Tetsunari [3 ,4 ]
Affiliations
[1] Univ Lisbon, INESC ID, Lisbon, Portugal
[2] Univ Lisbon, Inst Super Tecn, Lisbon, Portugal
[3] SOKENDAI Grad Univ Adv Studies, Natl Inst Informat, Chiyoda Ku, 2-1-2 Hitotsubashi, Tokyo, Japan
[4] SOKENDAI Grad Univ Adv Studies, Dept Informat, Chiyoda Ku, 2-1-2 Hitotsubashi, Tokyo, Japan
Keywords
DOI
10.1109/iros40897.2019.8967635
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Humans interact with their environment in rich and diverse ways, yet artificial agents often represent such behavior in a limited fashion. In this work we present motion concepts, a novel multimodal representation of human actions in a household environment. A motion concept combines a probabilistic description of the kinematics of an action with its contextual background, namely the location where it is performed and the objects held during the performance. We introduce Online Motion Concept Learning (OMCL), a novel algorithm that learns and recognizes motion concepts from action demonstrations. The algorithm is evaluated in a virtual-reality household environment featuring a human avatar. OMCL outperforms standard motion recognition algorithms on a one-shot recognition task, attesting to its potential for sample-efficient recognition of human actions.
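The abstract describes motion concepts only at a conceptual level. As a rough, hypothetical illustration of the idea (a probabilistic kinematic model paired with contextual cues such as location and held object, scored jointly for one-shot recognition), the Python sketch below uses a diagonal-Gaussian stand-in for the kinematic model. All names (MotionConcept, fit_from_demo, recognize) and the additive context-bonus scoring are assumptions made for illustration and are not taken from the paper.

```python
# Illustrative sketch only: MotionConcept, fit_from_demo and recognize are
# hypothetical names, not the paper's implementation. The sketch pairs a
# simple diagonal-Gaussian kinematic model with contextual tags (location,
# held object), loosely mirroring the multimodal idea in the abstract.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class MotionConcept:
    """An action label plus a diagonal-Gaussian kinematic model and context tags."""
    label: str
    mean: np.ndarray                                  # mean kinematic feature vector
    var: np.ndarray                                   # per-dimension variance (diagonal)
    locations: set = field(default_factory=set)       # e.g. {"kitchen"}
    held_objects: set = field(default_factory=set)    # e.g. {"cup"}

    def log_likelihood(self, features: np.ndarray) -> float:
        """Log-density of a demonstration's kinematic features under this concept."""
        var = np.maximum(self.var, 1e-6)
        return float(
            -0.5 * np.sum((features - self.mean) ** 2 / var + np.log(2 * np.pi * var))
        )


def fit_from_demo(label, demos, location, held_object):
    """Build a concept from one (or a few) equal-length kinematic feature vectors."""
    demos = np.atleast_2d(np.asarray(demos, dtype=float))
    return MotionConcept(
        label=label,
        mean=demos.mean(axis=0),
        var=demos.var(axis=0) + 1e-3,   # small floor so one demo still yields a density
        locations={location},
        held_objects={held_object},
    )


def recognize(concepts, features, location, held_object, context_bonus=2.0):
    """Score every concept by kinematic log-likelihood plus a bonus for matching context."""
    features = np.asarray(features, dtype=float)

    def score(c):
        bonus = context_bonus * ((location in c.locations) + (held_object in c.held_objects))
        return c.log_likelihood(features) + bonus

    return max(concepts, key=score).label


if __name__ == "__main__":
    # One demonstration per concept (one-shot setting).
    pour = fit_from_demo("pour", [[0.9, 0.1, 0.4]], "kitchen", "cup")
    wave = fit_from_demo("wave", [[0.1, 0.8, 0.2]], "living_room", "none")
    print(recognize([pour, wave], [0.85, 0.15, 0.35], "kitchen", "cup"))  # -> "pour"
```

With a single demonstration per concept, the variance floor keeps the density well defined, which loosely reflects the one-shot evaluation setting described above; the actual probabilistic model and learning rule used by OMCL are given in the paper, not here.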
Pages: 4288-4293
Page count: 6
Related papers (50 in total)
  • [41] Learning Better with Less: Effective Augmentation for Sample-Efficient Visual Reinforcement Learning
    Ma, Guozheng
    Zhang, Linrui
    Wang, Haoyu
    Li, Lu
    Wang, Zilin
    Wang, Zhen
    Shen, Li
    Wang, Xueqian
    Tao, Dacheng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [42] Sample-efficient and occlusion-robust reinforcement learning for robotic manipulation via multimodal fusion dualization and representation normalization
    Noh, Samyeul
    Lee, Wooju
    Myung, Hyun
    NEURAL NETWORKS, 2025, 185
  • [43] A Mechanism for Sample-Efficient In-Context Learning for Sparse Retrieval Tasks
    Abernethy, Jacob
    Agarwal, Alekh
    Marinov, Teodor V.
    Warmuth, Manfred K.
    INTERNATIONAL CONFERENCE ON ALGORITHMIC LEARNING THEORY, VOL 237, 2024, 237
  • [44] TEXPLORE: real-time sample-efficient reinforcement learning for robots
    Hester, Todd
    Stone, Peter
    MACHINE LEARNING, 2013, 90 (03) : 385 - 429
  • [45] M2CURL: Sample-Efficient Multimodal Reinforcement Learning via Self-Supervised Representation Learning for Robotic Manipulation
    Lygerakis, Fotios
    Dave, Vedant
    Rueckert, Elmar
    2024 21ST INTERNATIONAL CONFERENCE ON UBIQUITOUS ROBOTS, UR 2024, 2024: 490 - 497
  • [46] Policy Finetuning: Bridging Sample-Efficient Offline and Online Reinforcement Learning
    Xie, Tengyang
    Jiang, Nan
    Wang, Huan
    Xiong, Caiming
    Bai, Yu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [48] Augmented Memory: Sample-Efficient Generative Molecular Design with Reinforcement Learning
    Guo, Jeff
    Schwaller, Philippe
    JACS AU, 2024, 4 (06): 2160 - 2172
  • [49] Robust Humanoid Locomotion Using Trajectory Optimization and Sample-Efficient Learning
    Yeganegi, Mohammad Hasan
    Khadiv, Majid
    Moosavian, S. Ali A.
    Zhu, Jia-Jie
    Del Prete, Andrea
    Righetti, Ludovic
    2019 IEEE-RAS 19TH INTERNATIONAL CONFERENCE ON HUMANOID ROBOTS (HUMANOIDS), 2019: 170 - 177
  • [50] Sample-efficient model-based reinforcement learning for quantum control
    Khalid, Irtaza
    Weidner, Carrie A.
    Jonckheere, Edmond A.
    Schirmer, Sophie G.
    Langbein, Frank C.
    PHYSICAL REVIEW RESEARCH, 2023, 5 (04):