Learning A Joint Discriminative-Generative Model for Action Recognition

Cited by: 0
Authors
Alexiou, Ioannis [1 ]
Xiang, Tao [2 ]
Gong, Shaogang [1 ]
Affiliations
[1] Queen Mary Univ London, Sch Elect Engn & Comp Sci, London, England
[2] Vis Semant Ltd, London, England
Source
2015 INTERNATIONAL CONFERENCE ON SYSTEMS, SIGNALS AND IMAGE PROCESSING (IWSSIP 2015) | 2015
Keywords
Joint Learning; Discriminative-Generative Models; HMM; FKL;
DOI
not available
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
An action consists of a sequence of instantaneous motion patterns whose temporal ordering contains critical information, especially for distinguishing fine-grained action categories. However, existing action recognition methods are dominated by discriminative classifiers such as kernel machines or metric learning with Bag-of-Words (BoW) action representations. They ignore the temporal structures of actions in exchange for robustness against noise. Although such temporal structures can be modelled explicitly using dynamic generative models such as Hidden Markov Models (HMMs), these generative models are designed to maximise the likelihood of the data, therefore providing no guarantee of suitability for the discrimination required by action recognition. In this work, a novel approach is proposed to explore the best of both worlds by discriminatively learning a generative action model. Specifically, our approach is based on discriminative Fisher kernel learning, which learns a dynamic generative model so that the distance between the log-likelihood gradients induced by two actions of the same class is minimised. We demonstrate the advantages of the proposed model over state-of-the-art action recognition methods using two challenging benchmark datasets of complex actions.
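The core quantity in the abstract is the Fisher score — the gradient of a sequence's log-likelihood with respect to the generative model's parameters — and the objective of minimising the distance between the scores of same-class sequences. As a hedged illustration (not the paper's actual method, which uses an HMM), the sketch below computes Fisher scores for a single Gaussian generative model, whose gradients are available in closed form, and compares the score distance for a same-class pair against a cross-class pair; all names and parameters here are hypothetical:

```python
import numpy as np

def fisher_score(x, mu, var):
    """Fisher score: gradient of the sequence log-likelihood
    w.r.t. the parameters (mu, var) of a univariate Gaussian.
    An illustrative stand-in for the HMM used in the paper."""
    d_mu = np.sum((x - mu) / var)                       # d/d(mu) log p(x)
    d_var = np.sum((x - mu) ** 2 / (2 * var ** 2)       # d/d(var) log p(x)
                   - 1.0 / (2 * var))
    return np.array([d_mu, d_var])

def fisher_distance(x1, x2, mu, var):
    """Squared distance between the log-likelihood gradients of two
    sequences -- the quantity Fisher kernel learning (FKL) drives
    down for pairs from the same action class."""
    g1 = fisher_score(x1, mu, var)
    g2 = fisher_score(x2, mu, var)
    return float(np.sum((g1 - g2) ** 2))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 50)   # two sequences from the same "class"
b = rng.normal(0.0, 1.0, 50)
c = rng.normal(3.0, 1.0, 50)   # a sequence from a different "class"
mu, var = 0.0, 1.0             # parameters of the generative model

same = fisher_distance(a, b, mu, var)
diff = fisher_distance(a, c, mu, var)
print(same < diff)             # same-class gradients lie closer together
```

In the paper the generative model is an HMM, so the score vector additionally contains gradients with respect to the transition and emission parameters, and the model is trained so that this distance shrinks for same-class pairs rather than being fixed as above.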
Pages: 1-4 (4 pages)