InfoGCN: Representation Learning for Human Skeleton-based Action Recognition

Cited by: 181
Authors
Chi, Hyung-gun [1 ]
Ha, Myoung Hoon [2 ]
Chi, Seunggeun [1 ]
Lee, Sang Wan [2 ]
Huang, Qixing [3 ]
Ramani, Karthik [1 ,4 ]
Affiliations
[1] Purdue Univ, Sch Elect & Comp Engn, W Lafayette, IN 47907 USA
[2] Korea Adv Inst Sci & Technol, Daejeon, South Korea
[3] Univ Texas Austin, Austin, TX 78712 USA
[4] Purdue Univ, Sch Mech Engn, W Lafayette, IN 47907 USA
Funding
National Research Foundation of Singapore; US National Science Foundation;
DOI
10.1109/CVPR52688.2022.01955
CLC Classification Number
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Human skeleton-based action recognition offers a valuable means to understand the intricacies of human behavior because it can handle the complex relationships between physical constraints and intention. Although several studies have focused on encoding a skeleton, less attention has been paid to embedding this information into the latent representations of human action. InfoGCN proposes a learning framework for action recognition combining a novel learning objective and an encoding method. First, we design an information bottleneck-based learning objective to guide the model to learn informative but compact latent representations. To provide discriminative information for classifying actions, we introduce an attention-based graph convolution that captures the context-dependent intrinsic topology of human action. In addition, we present a multi-modal representation of the skeleton using the relative positions of joints, designed to provide complementary spatial information for joints. InfoGCN surpasses the known state of the art on multiple skeleton-based action recognition benchmarks, with accuracies of 93.0% on the NTU RGB+D 60 cross-subject split, 89.8% on the NTU RGB+D 120 cross-subject split, and 97.0% on NW-UCLA.
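The abstract describes two concrete technical components: an attention-based graph convolution that infers a context-dependent joint topology, and a multi-modal skeleton input built from relative joint positions. The PyTorch sketch below is a rough illustration of one plausible form of these ideas; the class AttentionGraphConv, the relative_positions helper, the anchor-joint choice, and all dimensions are assumptions made for illustration, not the authors' released implementation.

# Illustrative sketch only; not the InfoGCN code. Assumes per-frame joint
# features of shape (batch, num_joints, channels).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGraphConv(nn.Module):
    """Graph convolution whose adjacency is refined by self-attention,
    so the aggregation topology can depend on the input action context."""
    def __init__(self, in_channels, out_channels, num_joints):
        super().__init__()
        # Learnable base topology shared across samples (uniform init).
        self.base_adj = nn.Parameter(torch.ones(num_joints, num_joints) / num_joints)
        # Projections used to infer a context-dependent topology.
        self.query = nn.Linear(in_channels, out_channels)
        self.key = nn.Linear(in_channels, out_channels)
        self.value = nn.Linear(in_channels, out_channels)

    def forward(self, x):
        # x: (batch, num_joints, in_channels)
        q, k, v = self.query(x), self.key(x), self.value(x)
        # Attention over joints gives a per-sample refinement of the topology.
        attn = torch.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)
        adj = self.base_adj + attn          # (batch, J, J)
        return F.relu(adj @ v)              # aggregate neighbor features -> (batch, J, out_channels)

def relative_positions(joints, anchor=0):
    """Multi-modal input sketch: express each joint relative to an anchor joint
    (the anchor index here is an assumption), giving complementary spatial cues."""
    return joints - joints[..., anchor:anchor + 1, :]

# Usage example (shapes are illustrative):
# x = torch.randn(8, 25, 64)                      # 8 sequences' frame features, 25 joints
# layer = AttentionGraphConv(64, 128, num_joints=25)
# out = layer(x)                                  # (8, 25, 128)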
Pages: 20154-20164
Page count: 11
Related Papers
50 records in total
  • [21] Representation modeling learning with multi-domain decoupling for unsupervised skeleton-based action recognition
    He, Zhiquan
    Lv, Jiantu
    Fang, Shizhang
    NEUROCOMPUTING, 2024, 582
  • [22] Deep Progressive Reinforcement Learning for Skeleton-based Action Recognition
    Tang, Yansong
    Tian, Yi
    Lu, Jiwen
    Li, Peiyang
    Zhou, Jie
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 5323 - 5332
  • [23] A Cross View Learning Approach for Skeleton-Based Action Recognition
    Zheng, Hui
    Zhang, Xinming
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (05) : 3061 - 3072
  • [24] Deep Learning Techniques for Skeleton-Based Action Recognition: A Survey
    Pham, Dinh-Tan
    COMPUTATIONAL SCIENCE AND ITS APPLICATIONS-ICCSA 2024, PT II, 2024, 14814 : 427 - 435
  • [25] Deep Learning on Lie Groups for Skeleton-based Action Recognition
    Huang, Zhiwu
    Wan, Chengde
    Probst, Thomas
    Van Gool, Luc
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 1243 - 1252
  • [26] Progressive semantic learning for unsupervised skeleton-based action recognition
    Qin, Hao
    Chen, Luyuan
    Kong, Ming
    Zhao, Zhuoran
    Zeng, Xianzhou
    Lu, Mengxu
    Zhu, Qiang
    MACHINE LEARNING, 2025, 114 (03)
  • [27] A Short Survey on Deep Learning for Skeleton-based Action Recognition
    Wang, Wei
    Zhang, Yu-Dong
    COMPANION PROCEEDINGS OF THE 14TH IEEE/ACM INTERNATIONAL CONFERENCE ON UTILITY AND CLOUD COMPUTING (UCC'21 COMPANION), 2021,
  • [28] SkelResNet: Transfer Learning Approach for Skeleton-Based Action Recognition
    Kilic, Ugur
    Karadag, Ozge Oztimur
    Ozyer, Gulsah Tumuklu
    32ND IEEE SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE, SIU 2024, 2024,
  • [29] JointContrast: Skeleton-Based Mutual Action Recognition with Contrastive Learning
    Jia, Xiangze
    Zhang, Ji
    Wang, Zhen
    Luo, Yonglong
    Chen, Fulong
    Xiao, Jing
    PRICAI 2022: TRENDS IN ARTIFICIAL INTELLIGENCE, PT III, 2022, 13631 : 478 - 489
  • [30] Revisiting Skeleton-based Action Recognition
    Duan, Haodong
    Zhao, Yue
    Chen, Kai
    Lin, Dahua
    Dai, Bo
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 2959 - 2968