JSE: Joint Semantic Encoder for zero-shot gesture learning

Cited by: 1
Authors
Madapana, Naveen [1 ]
Wachs, Juan [1 ]
Affiliation
[1] Purdue Univ, Sch Ind Engn, W Lafayette, IN 47906 USA
Funding
US Agency for Healthcare Research and Quality;
Keywords
Zero-shot learning; Gesture recognition; Feature selection; Transfer learning; ACTION RECOGNITION; VISUALIZATION; INTERFACE;
DOI
10.1007/s10044-021-00992-y
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Zero-shot learning (ZSL) is a transfer learning paradigm that aims to recognize unseen categories just by having a high-level description of them. While deep learning has greatly pushed the limits of ZSL for object classification, ZSL for gesture recognition (ZSGL) remains largely unexplored. Previous attempts to address ZSGL focused on the creation of gesture attributes and on algorithmic improvements, and there is little or no research concerned with feature selection for ZSGL. It is indisputable that deep learning has obviated the need for feature engineering for problems with large datasets. However, when data are scarce, it is critical to leverage domain information to create discriminative input features. The main goal of this work is to study the effect of three different feature extraction techniques (velocity, heuristical and latent features) on the performance of ZSGL. In addition, we propose a bilinear auto-encoder approach, referred to as the Joint Semantic Encoder (JSE), for ZSGL that jointly minimizes the reconstruction, semantic and classification losses. We conducted extensive experiments to compare and contrast the feature extraction techniques and to evaluate the performance of JSE with respect to existing ZSL methods. For the attribute-based classification scenario, irrespective of the feature type, results showed that JSE outperforms other approaches by 5% (p < 0.01). When JSE is trained with heuristical features in the across-category condition, JSE significantly outperforms other methods by 5% (p < 0.01).
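The abstract names the three terms that JSE minimizes jointly (reconstruction, semantic and classification losses) but gives no architectural detail here. The snippet below is a minimal sketch of such a joint objective, assuming a simple linear encoder/decoder between gesture features and the attribute (semantic) space, compatibility scores against seen-class attribute prototypes for classification, and illustrative loss weights lambda_sem and lambda_cls; these choices and the use of PyTorch are assumptions for illustration, not the authors' JSE implementation.

```python
# Sketch of a joint reconstruction + semantic + classification objective for
# zero-shot gesture learning. All layer sizes, loss weights and names below
# are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointSemanticEncoderSketch(nn.Module):
    def __init__(self, feat_dim, attr_dim, seen_class_attrs):
        super().__init__()
        # Linear encoder to the attribute space and decoder back to features.
        self.encoder = nn.Linear(feat_dim, attr_dim, bias=False)
        self.decoder = nn.Linear(attr_dim, feat_dim, bias=False)
        # Attribute prototypes of the seen classes, shape (num_seen, attr_dim).
        self.register_buffer("seen_class_attrs", seen_class_attrs)

    def forward(self, x):
        a_hat = self.encoder(x)            # predicted semantic embedding
        x_rec = self.decoder(a_hat)        # reconstructed input features
        logits = a_hat @ self.seen_class_attrs.t()  # seen-class compatibility
        return a_hat, x_rec, logits

def joint_loss(model, x, a, y, lambda_sem=1.0, lambda_cls=1.0):
    """Reconstruction + semantic + classification losses (weights assumed)."""
    a_hat, x_rec, logits = model(x)
    loss_rec = F.mse_loss(x_rec, x)        # reconstruction loss
    loss_sem = F.mse_loss(a_hat, a)        # semantic (attribute) loss
    loss_cls = F.cross_entropy(logits, y)  # classification loss, seen classes
    return loss_rec + lambda_sem * loss_sem + lambda_cls * loss_cls

if __name__ == "__main__":
    torch.manual_seed(0)
    feat_dim, attr_dim, num_seen, n = 64, 16, 10, 32
    seen_attrs = torch.randn(num_seen, attr_dim)
    model = JointSemanticEncoderSketch(feat_dim, attr_dim, seen_attrs)
    x = torch.randn(n, feat_dim)              # gesture descriptors
    y = torch.randint(0, num_seen, (n,))      # seen-class labels
    a = seen_attrs[y]                         # ground-truth attribute vectors
    loss = joint_loss(model, x, a, y)
    loss.backward()
    print(float(loss))
```

At test time, a zero-shot prediction would typically score the encoded embedding against unseen-class attribute vectors rather than the seen-class prototypes used for the training loss.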
Pages: 679 - 692
Number of pages: 14
Related Papers
50 records in total
  • [21] Attentive Semantic Preservation Network for Zero-Shot Learning
    Lu, Ziqian
    Yu, Yunlong
    Lu, Zhe-Ming
    Shen, Feng-Li
    Zhang, Zhongfei
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020, : 2919 - 2925
  • [22] Zero-Shot Learning on Semantic Class Prototype Graph
    Fu, Zhenyong
    Xiang, Tao
    Kodirov, Elyor
    Gong, Shaogang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2018, 40 (08) : 2009 - 2022
  • [23] Semantic embeddings of generic objects for zero-shot learning
    Hascoet, Tristan
    Ariki, Yasuo
    Takiguchi, Tetsuya
    EURASIP JOURNAL ON IMAGE AND VIDEO PROCESSING, 2019, 2019 (1)
  • [24] Prioritized Semantic Learning for Zero-Shot Instance Navigation
    Sun, Xinyu
    Liu, Lizhao
    Zhi, Hongyan
    Qiu, Ronghe
    Liang, Junwei
    COMPUTER VISION - ECCV 2024, PT XII, 2025, 15070 : 161 - 178
  • [25] Semantic Contrastive Embedding for Generalized Zero-Shot Learning
    Han, Zongyan
    Fu, Zhenyong
    Chen, Shuo
    Yang, Jian
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2022, 130 : 2606 - 2622
  • [26] A study on zero-shot learning from semantic viewpoint
    Bhagat, P. K.
    Choudhary, Prakash
    Singh, Kh Manglem
    THE VISUAL COMPUTER, 2023, 39 : 2149 - 2163
  • [27] ENCYCLOPEDIA ENHANCED SEMANTIC EMBEDDING FOR ZERO-SHOT LEARNING
    Jia, Zhen
    Zhang, Junge
    Huang, Kaiqi
    Tan, Tieniu
    2017 24TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2017, : 1287 - 1291
  • [28] Zero-Shot Classification with Discriminative Semantic Representation Learning
    Ye, Meng
    Guo, Yuhong
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 5103 - 5111
  • [29] Zero-Shot Learning via Semantic Similarity Embedding
    Zhang, Ziming
    Saligrama, Venkatesh
    2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015, : 4166 - 4174
  • [30] A meaningful learning method for zero-shot semantic segmentation
    Liu, Xianglong
    Bai, Shihao
    An, Shan
    Wang, Shuo
    Liu, Wei
    Zhao, Xiaowei
    Ma, Yuqing
    SCIENCE CHINA INFORMATION SCIENCES, 2023, 66 (11) : 35 - 53