View-invariant recognition using corresponding object fragments

Cited by: 0
Authors
Bart, E [1 ]
Byvatov, E [1 ]
Ullman, S [1 ]
Affiliation
[1] Weizmann Inst Sci, Dept Comp Sci & Appl Math, IL-76100 Rehovot, Israel
DOI
None available
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We develop a novel approach to view-invariant recognition and apply it to the task of recognizing face images under widely separated viewing directions. Our main contribution is a novel object representation scheme using 'extended fragments' that enables us to achieve a high level of recognition performance and generalization across a wide range of viewing conditions. Extended fragments are equivalence classes of image fragments that represent informative object parts under different viewing conditions. They are extracted automatically from short video sequences during learning. Using this representation, the scheme is unique in its ability to generalize from a single view of a novel object and compensate for a significant change in viewing direction without using 3D information. As a result, novel objects can be recognized from viewing directions from which they were not seen in the past. Experiments demonstrate that the scheme achieves significantly better generalization and recognition performance than previously used methods.
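The core idea in the abstract can be illustrated with a minimal sketch (not the authors' code): an extended fragment is modeled here as an equivalence class of view-specific templates, and its response on an image patch is the best normalized-cross-correlation match over all templates in the class. All names (`ncc`, `extended_fragment_response`) and the toy data are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of the 'extended fragment' idea: an equivalence class of
# view-specific templates whose response is the maximum normalized
# cross-correlation over the class. Names and data are illustrative only.
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two equal-sized 2D arrays."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(t)
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def extended_fragment_response(patch, templates):
    """Response of an extended fragment: best match over its view-specific templates."""
    return max(ncc(patch, t) for t in templates)

# Toy usage: two 'views' (frontal, profile) of the same hypothetical part.
# A frontal patch matches the class via its frontal member, so recognition
# does not require the patch to resemble every view.
rng = np.random.default_rng(0)
frontal = rng.random((8, 8))
profile = rng.random((8, 8))
templates = [frontal, profile]
print(round(extended_fragment_response(frontal + 0.01, templates), 3))  # prints 1.0
```

Taking the maximum over the class is what makes the response view-tolerant: any single member matching well is enough, which mirrors the equivalence-class reading of extended fragments in the abstract.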
Pages: 152-165
Page count: 14
Related papers
50 records
  • [41] Dual-attention Network for View-invariant Action Recognition
    Kumie, Gedamu Alemu
    Habtie, Maregu Assefa
    Ayall, Tewodros Alemu
    Zhou, Changjun
    Liu, Huawen
    Seid, Abegaz Mohammed
    Erbad, Aiman
    COMPLEX & INTELLIGENT SYSTEMS, 2024, 10 (01) : 305 - 321
  • [42] View-invariant object category learning, recognition, and search: How spatial and object attention are coordinated using surface-based attentional shrouds
    Fazl, Arash
    Grossberg, Stephen
    Mingolla, Ennio
    COGNITIVE PSYCHOLOGY, 2009, 58 (01) : 1 - 48
  • [43] Deeply Learned View-Invariant Features for Cross-View Action Recognition
    Kong, Yu
    Ding, Zhengming
    Li, Jun
    Fu, Yun
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2017, 26 (06) : 3028 - 3037
  • [44] Learning View-invariant Sparse Representations for Cross-view Action Recognition
    Zheng, Jingjing
    Jiang, Zhuolin
    2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2013, : 3176 - 3183
  • [45] View-invariant object recognition ability develops after discrimination, not mere exposure, at several viewing angles
    Yamashita, Wakayo
    Wang, Gang
    Tanaka, Keiji
    EUROPEAN JOURNAL OF NEUROSCIENCE, 2010, 31 (02) : 327 - 335
  • [46] Contrastive Learning of View-invariant Representations for Facial Expressions Recognition
    Roy, Shuvendu
    Etemad, Ali
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (04)
  • [47] Attention Transfer (ANT) Network for View-invariant Action Recognition
    Ji, Yanli
    Xu, Feixiang
    Yang, Yang
    Xie, Ning
    Shen, Heng Tao
    Harada, Tatsuya
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 574 - 582
  • [48] Hierarchically Learned View-Invariant Representations for Cross-View Action Recognition
    Liu, Yang
    Lu, Zhaoyang
    Li, Jing
    Yang, Tao
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2019, 29 (08) : 2416 - 2430
  • [49] Towards View-Invariant Intersection Recognition from Videos using Deep Network Ensembles
    Kumar, Abhijeet
    Gupta, Gunshi
    Sharma, Avinash
    Krishna, K. Madhava
    2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018, : 1053 - 1060
  • [50] View-invariant gait recognition system using a gait energy image decomposition method
    Verlekar, Tanmay T.
    Correia, Paulo L.
    Soares, Luis D.
    IET BIOMETRICS, 2017, 6 (04) : 299 - 306