Action Keypoint Network for Efficient Video Recognition

Cited by: 3
Authors
Chen, Xu [1 ,2 ]
Han, Yahong [1 ,2 ,3 ]
Wang, Xiaohan [4 ]
Sun, Yifan [5 ]
Yang, Yi [4 ]
Affiliations
[1] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300072, Peoples R China
[2] Tianjin Univ, Tianjin Key Lab Machine Learning, Tianjin 300072, Peoples R China
[3] Peng Cheng Lab, Shenzhen 518066, Peoples R China
[4] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310000, Peoples R China
[5] Baidu Res, Beijing 100000, Peoples R China
Keywords
Video recognition; space-time interest points; deep learning; point cloud
DOI
10.1109/TIP.2022.3191461
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Reducing redundancy is crucial for improving the efficiency of video recognition models. An effective approach is to select informative content from the holistic video, yielding a popular family of dynamic video recognition methods. However, existing dynamic methods focus on either temporal or spatial selection independently, neglecting the reality that redundancy is usually both spatial and temporal. Moreover, their selected content is usually cropped with fixed shapes (e.g., temporally-cropped frames, spatially-cropped patches), while the realistic distribution of informative content can be much more diverse. Building on these two insights, this paper proposes to integrate temporal and spatial selection into an Action Keypoint Network (AK-Net). From different frames and positions, AK-Net selects informative points scattered across arbitrary-shaped regions as a set of "action keypoints" and then transforms video recognition into point cloud classification. More concretely, AK-Net has two steps, i.e., keypoint selection and point cloud classification. First, it inputs the video into a baseline network and outputs a feature map from an intermediate layer. We view each pixel on this feature map as a spatial-temporal point and select informative keypoints using self-attention. Second, AK-Net devises a ranking criterion to arrange the keypoints into an ordered 1D sequence. Since the video is represented as a 1D sequence after the specified layer, AK-Net transforms the subsequent layers into a point cloud classification sub-net by compacting the original 2D convolutional kernels into 1D kernels. Consequently, AK-Net brings two-fold efficiency benefits: the keypoint selection step collects informative content within arbitrary shapes and improves the efficiency of modeling spatial-temporal dependencies, while the point cloud classification step further reduces the computational cost by compacting the convolutional kernels. Experimental results show that AK-Net can consistently improve the efficiency and performance of baseline methods on several video recognition benchmarks.
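
The two-step pipeline described in the abstract lends itself to a short illustration. The PyTorch sketch below is not the authors' implementation: it simplifies the paper's self-attention selection to a learned 1x1-convolution saliency score, uses score-descending order as a stand-in for the paper's ranking criterion, and all module names (KeypointSelector, PointCloudHead) and hyper-parameters are hypothetical.

    import torch
    import torch.nn as nn


    class KeypointSelector(nn.Module):
        """Step 1 (hypothetical sketch): treat every pixel of an intermediate
        (T, H, W) feature map as a spatial-temporal point, score it, and keep
        the k highest-scoring points as 'action keypoints'."""

        def __init__(self, channels: int, num_keypoints: int = 256):
            super().__init__()
            # Simplification: a 1x1 Conv3d saliency gate stands in for the
            # paper's self-attention-based selection.
            self.score = nn.Conv3d(channels, 1, kernel_size=1)
            self.k = num_keypoints

        def forward(self, feat: torch.Tensor) -> torch.Tensor:
            # feat: (B, C, T, H, W) feature map from an intermediate layer.
            b, c = feat.shape[:2]
            scores = self.score(feat).flatten(2).squeeze(1)  # (B, T*H*W)
            points = feat.flatten(2)                         # (B, C, T*H*W)
            # torch.topk returns indices sorted by descending score; here
            # that ordering stands in for the paper's ranking criterion.
            idx = scores.topk(self.k, dim=1).indices         # (B, k)
            idx = idx.unsqueeze(1).expand(b, c, self.k)      # (B, C, k)
            return points.gather(2, idx)                     # ordered 1D sequence


    class PointCloudHead(nn.Module):
        """Step 2 (hypothetical sketch): classify the 1D keypoint sequence,
        with 1D convolutions standing in for the backbone's remaining 2D
        kernels after compaction."""

        def __init__(self, channels: int, num_classes: int):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool1d(1),
            )
            self.fc = nn.Linear(channels, num_classes)

        def forward(self, seq: torch.Tensor) -> torch.Tensor:
            # seq: (B, C, k) -> pooled (B, C) -> logits (B, num_classes)
            return self.fc(self.conv(seq).squeeze(-1))


    # Toy usage: 8 frames of 14x14 feature maps with 512 channels.
    feat = torch.randn(2, 512, 8, 14, 14)
    seq = KeypointSelector(channels=512)(feat)          # (2, 512, 256)
    logits = PointCloudHead(channels=512, num_classes=174)(seq)
    print(logits.shape)                                 # torch.Size([2, 174])

The efficiency argument is visible in the shapes: under these assumed sizes, the tail of the network processes only k = 256 selected points instead of all 8 x 14 x 14 = 1568 feature positions, and does so with 1D rather than 2D kernels.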
Pages: 4980-4993
Number of pages: 14