Learning Spatio-Temporal Representation with Local and Global Diffusion

Cited by: 107
Authors
Qiu, Zhaofan [1 ]
Yao, Ting [2 ]
Ngo, Chong-Wah [3 ]
Tian, Xinmei [1 ]
Mei, Tao [2 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Peoples R China
[2] JD AI Res, Beijing, Peoples R China
[3] City Univ Hong Kong, Kowloon, Hong Kong, Peoples R China
DOI: 10.1109/CVPR.2019.01233
Chinese Library Classification: TP18 [Theory of Artificial Intelligence]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Convolutional Neural Networks (CNNs) have been regarded as a powerful class of models for visual recognition problems. Nevertheless, the convolutional filters in these networks are local operations that ignore long-range dependencies. This drawback becomes even more pronounced for video recognition, since video is an information-intensive medium with complex temporal variations. In this paper, we present a novel framework that boosts spatio-temporal representation learning through Local and Global Diffusion (LGD). Specifically, we construct a novel neural network architecture that learns local and global representations in parallel. The architecture is composed of LGD blocks, each of which updates the local and global features by modeling the diffusions between these two representations. The diffusions let the two aspects of information, localized and holistic, interact for a more powerful way of representation learning. Furthermore, a kernelized classifier is introduced to combine the representations from the two aspects for video recognition. Our LGD networks achieve clear improvements over the best competitors on the large-scale Kinetics-400 and Kinetics-600 video classification datasets, by 3.5% and 0.7% respectively. We further examine the generalization of both the global and local representations produced by our pre-trained LGD networks on four different benchmarks for video action recognition and spatio-temporal action detection tasks. Superior performance over several state-of-the-art techniques on these benchmarks is reported.
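To make the block-level update concrete, the following is a minimal NumPy sketch of one diffusion step between a local feature map and a global feature vector. It is a hypothetical simplification, not the paper's implementation: the local path's convolution is omitted, the projection matrices `W_gl`, `W_gg`, and `W_lg` stand in for learned weights, and spatial-temporal positions are flattened into a single axis.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def lgd_block(x, g, W_gl, W_gg, W_lg):
    """One simplified local-and-global diffusion step (illustrative sketch).

    x : (C, N) local features, N = T*H*W flattened positions
    g : (C,)   global feature vector
    W_gl, W_gg, W_lg : (C, C) stand-in projection matrices

    Local path: every position receives the projected global context
    (global-to-local diffusion). Global path: the previous global state
    is mixed with the average-pooled updated locals (local-to-global).
    """
    # Broadcast the projected global vector to all local positions.
    x_new = relu(x + (W_gl @ g)[:, None])
    # Fold the pooled local information back into the global state.
    g_new = relu(W_gg @ g + W_lg @ x_new.mean(axis=1))
    return x_new, g_new
```

Stacking such blocks lets localized and holistic information interact repeatedly, which is the core idea the abstract describes; the kernelized classifier then consumes both outputs.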
Pages: 12048-12057 (10 pages)
Related papers (50 total; items 21-30 shown)
  • [21] Visual representation of spatio-temporal structure
    Schill, K
    Zetzsche, C
    Brauer, W
    Eisenkolb, A
    Musto, A
    HUMAN VISION AND ELECTRONIC IMAGING III, 1998, 3299 : 128 - 138
  • [22] Deep Learning Model for Global Spatio-Temporal Image Prediction
    Nikezic, Dusan P.
    Ramadani, Uzahir R.
    Radivojevic, Dusan S.
    Lazovic, Ivan M.
    Mirkov, Nikola S.
    MATHEMATICS, 2022, 10 (18)
  • [23] Spatio-Temporal EEG Representation Learning on Riemannian Manifold and Euclidean Space
    Zhang, Guangyi
    Etemad, Ali
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2024, 8 (02): : 1469 - 1483
  • [24] Spatio-Temporal Representation Learning with Social Tie for Personalized POI Recommendation
    Dai, Shaojie
    Yu, Yanwei
    Fan, Hao
    Dong, Junyu
    DATA SCIENCE AND ENGINEERING, 2022, 7 (01) : 44 - 56
  • [25] Hierarchical Representation Learning based spatio-temporal data redundancy reduction
    Wang, Min
    Yang, Shuyuan
    Wu, Bin
    NEUROCOMPUTING, 2016, 173 : 298 - 305
  • [27] Urban mobility structure detection via spatio-temporal representation learning
    Duan, Xiaoqi
    Cehui Xuebao/Acta Geodaetica et Cartographica Sinica, 2024, 53 (08):
  • [28] Spatio-Temporal Consistency for Multivariate Time-Series Representation Learning
    Lee, Sangho
    Kim, Wonjoon
    Son, Youngdoo
    IEEE ACCESS, 2024, 12 : 30962 - 30975
  • [29] Personalized POI Recommendation: Spatio-Temporal Representation Learning with Social Tie
    Dai, Shaojie
    Yu, Yanwei
    Fan, Hao
    Dong, Junyu
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS (DASFAA 2021), PT I, 2021, 12681 : 558 - 574
  • [30] Learning Dynamic Graph Representation of Brain Connectome with Spatio-Temporal Attention
    Kim, Byung-Hoon
    Ye, Jong Chul
    Kim, Jae-Jin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34