UNSUPERVISED LEARNING FOR FORECASTING ACTION REPRESENTATIONS

Cited: 0
Authors
Zhong, Yi [1 ]
Zheng, Wei-Shi [1 ,2 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Data & Comp Sci, Guangzhou, Guangdong, Peoples R China
[2] Minist Educ, Key Lab Machine Intelligence & Adv Comp, Beijing, Peoples R China
Keywords
Unsupervised learning; temporal context; action forecasting;
DOI
Not available
CLC classification
TP31 [Computer software];
Discipline codes
081202 ; 0835 ;
Abstract
Most previous works on future forecasting require large numbers of videos with frame-level labels, which limits their applicability, since labelling video frames demands tremendous effort. In this paper, we present an unsupervised learning framework that anticipates future representations by exploiting temporal historical information, and that trains this anticipation capacity using only unlabelled videos. In contrast to existing methods that predict the future representation from a single static image, our model introduces a novel temporal context learning model that estimates the temporal evolution tendency by compacting the outputs of all time steps of an LSTM. We evaluate the proposed model on two activity datasets, the TV Human Interaction dataset and the THUMOS validation and test sets, and demonstrate its effectiveness on the future representation anticipation task.
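The record above does not spell out how the LSTM outputs are compacted into a temporal context. As a minimal sketch of the general idea only — assuming mean pooling as the compacting operation and a single linear projection head, with all weight names invented for illustration — the anticipation step might look like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gate pre-activations stacked as [input, forget, cell, output]."""
    z = W @ x + U @ h + b
    H = h.size
    i = sigmoid(z[:H])           # input gate
    f = sigmoid(z[H:2 * H])      # forget gate
    g = np.tanh(z[2 * H:3 * H])  # candidate cell state
    o = sigmoid(z[3 * H:])       # output gate
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def anticipate(frames, W, U, b, W_out):
    """Run an LSTM over observed frame features, compact the outputs of
    all time steps (mean pooling here, an assumption) into one temporal
    context vector, and project it to the anticipated future representation."""
    H = W_out.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    outputs = []
    for x in frames:
        h, c = lstm_step(x, h, c, W, U, b)
        outputs.append(h)
    context = np.mean(outputs, axis=0)  # compact all time-step outputs
    return W_out @ context              # anticipated future feature vector
```

In practice the weights would be learned with an unsupervised objective (e.g. regressing the representation of a later, unlabelled frame); here random weights simply illustrate the shapes involved.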
Pages: 1073-1077
Page count: 5