Learning Multimodal Representations for Sample-efficient Recognition of Human Actions

Cited by: 0
|
Authors
Vasco, Miguel [1 ,2 ]
Melo, Francisco S. [1 ,2 ]
de Matos, David Martins [1 ,2 ]
Paiva, Ana [1 ,2 ]
Inamura, Tetsunari [3 ,4 ]
Affiliations
[1] Univ Lisbon, INESC ID, Lisbon, Portugal
[2] Univ Lisbon, Inst Super Tecn, Lisbon, Portugal
[3] SOKENDAI Grad Univ Adv Studies, Natl Inst Informat, Chiyoda Ku, 2-1-2 Hitotsubashi, Tokyo, Japan
[4] SOKENDAI Grad Univ Adv Studies, Dept Informat, Chiyoda Ku, 2-1-2 Hitotsubashi, Tokyo, Japan
DOI: 10.1109/iros40897.2019.8967635
CLC Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Humans interact with their environment in rich and diverse ways, yet artificial agents typically represent such behavior in only limited form. In this work we present motion concepts, a novel multimodal representation of human actions in a household environment. A motion concept combines a probabilistic description of an action's kinematics with its contextual background, namely the location where the action is performed and the objects held during its execution. We introduce Online Motion Concept Learning (OMCL), a novel algorithm that learns and recognizes motion concepts from action demonstrations. The algorithm is evaluated in a virtual-reality household environment featuring a human avatar. OMCL outperforms standard motion recognition algorithms on a one-shot recognition task, attesting to its potential for sample-efficient recognition of human actions.
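The abstract describes a motion concept as a probabilistic kinematic model paired with contextual information (location and held object), learned online from demonstrations. The sketch below is a minimal, hypothetical illustration of such a representation, assuming a diagonal-Gaussian kinematic model updated with Welford's online algorithm and smoothed categorical counts for context; the class name, fields, and modeling choices are illustrative assumptions, not the paper's actual OMCL formulation:

```python
from dataclasses import dataclass, field
from math import exp, pi, sqrt

@dataclass
class MotionConcept:
    """Toy motion concept: kinematics (diagonal Gaussian over trajectory
    features) plus context (categorical counts over location and held object)."""
    name: str
    n: int = 0                                       # demonstrations seen so far
    mean: list = field(default_factory=list)         # running mean of features
    m2: list = field(default_factory=list)           # running sum of squared deviations
    location_counts: dict = field(default_factory=dict)
    object_counts: dict = field(default_factory=dict)

    def update(self, features, location, held_object):
        """Incorporate one demonstration online (Welford's algorithm)."""
        if self.n == 0:
            self.mean = [0.0] * len(features)
            self.m2 = [0.0] * len(features)
        self.n += 1
        for i, x in enumerate(features):
            d = x - self.mean[i]
            self.mean[i] += d / self.n
            self.m2[i] += d * (x - self.mean[i])
        self.location_counts[location] = self.location_counts.get(location, 0) + 1
        self.object_counts[held_object] = self.object_counts.get(held_object, 0) + 1

    def score(self, features, location, held_object):
        """Unnormalized likelihood: Gaussian kinematics x smoothed context priors."""
        var = [max(m / max(self.n - 1, 1), 1e-6) for m in self.m2]
        lik = 1.0
        for x, mu, v in zip(features, self.mean, var):
            lik *= exp(-((x - mu) ** 2) / (2 * v)) / sqrt(2 * pi * v)
        # Laplace-smoothed categorical probabilities for the context terms.
        p_loc = (self.location_counts.get(location, 0) + 1) / (self.n + len(self.location_counts) + 1)
        p_obj = (self.object_counts.get(held_object, 0) + 1) / (self.n + len(self.object_counts) + 1)
        return lik * p_loc * p_obj
```

Recognition would then amount to scoring an observed demonstration against each stored concept and picking the highest-scoring one; the one-shot setting corresponds to concepts updated from a single demonstration.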
Pages: 4288-4293 (6 pages)
Related Papers (50 total)
  • [1] Online Motion Concept Learning: A Novel Algorithm for Sample-Efficient Learning and Recognition of Human Actions
    Vasco, Miguel
    Melo, Francisco
    de Matos, David Martins
    Paiva, Ana
    Inamura, Tetsunari
    AAMAS '19: PROCEEDINGS OF THE 18TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2019, : 2244 - 2246
  • [2] Relative Entropy Regularized Sample-Efficient Reinforcement Learning With Continuous Actions
    Shang, Zhiwei
    Li, Renxing
    Zheng, Chunhua
    Li, Huiyun
    Cui, Yunduan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36 (01) : 475 - 485
  • [4] Sample-Efficient Learning of Mixtures
    Ashtiani, Hassan
    Ben-David, Shai
    Mehrabian, Abbas
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 2679 - 2686
  • [5] Sample-Efficient Multimodal Dynamics Modeling for Risk-Sensitive Reinforcement Learning
    Yashima, Ryota
    Yamaguchi, Akihiko
    Hashimoto, Koichi
    2022 8TH INTERNATIONAL CONFERENCE ON MECHATRONICS AND ROBOTICS ENGINEERING (ICMRE 2022), 2022, : 21 - 27
  • [7] Sample-efficient Adversarial Imitation Learning
    Jung, Dahuin
    Lee, Hyungyu
    Yoon, Sungroh
    JOURNAL OF MACHINE LEARNING RESEARCH, 2024, 25 : 1 - 32
  • [10] Sample-Efficient Neural Architecture Search by Learning Actions for Monte Carlo Tree Search
    Wang, Linnan
    Xie, Saining
    Li, Teng
    Fonseca, Rodrigo
    Tian, Yuandong
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (09) : 5503 - 5515