Pay Attention to the Activations: A Modular Attention Mechanism for Fine-Grained Image Recognition

Cited by: 64
Authors
Rodriguez, Pau [1 ]
Velazquez, Diego [2 ]
Cucurull, Guillem [1 ]
Gonfaus, Josep M. [3 ]
Roca, E. Xavier [2 ]
Gonzalez, Jordi [2 ]
Affiliations
[1] Element AI, Montreal, PQ H2S 3G9, Canada
[2] Univ Autonoma Barcelona, Comp Vis Ctr, Bellaterra 08193, Spain
[3] Univ Autonoma Barcelona, Visual Tagging Serv, Parc Recerca, Bellaterra 08193, Spain
Keywords
Computer architecture; Computational modeling; Image recognition; Task analysis; Proposals; Logic gates; Clutter; Image retrieval; Deep learning; Convolutional neural networks; Attention-based learning; Visual attention; Model; Age
DOI
10.1109/TMM.2019.2928494
CLC number
TP [Automation and Computer Technology]
Discipline code
0812
Abstract
Fine-grained image recognition is central to many multimedia tasks such as search, retrieval, and captioning. These tasks remain challenging because samples of the same class can differ more in appearance than samples from different classes, mainly due to deformations, pose changes, and the presence of clutter. In the literature, attention has been one of the most successful strategies for handling these problems. Attention has typically been implemented in neural networks by selecting the most informative regions of the image so as to improve classification. In contrast, in this paper, attention is applied not at the image level but to the convolutional feature activations. In essence, with our approach, the neural model learns to attend to lower-level feature activations without requiring part annotations and uses those activations to update and rectify the output likelihood distribution. The proposed mechanism is modular, architecture-independent, and efficient in terms of both the parameters and the computation required. Experiments demonstrate that well-known networks such as wide residual networks and ResNeXt, when augmented with our approach, systematically improve their classification accuracy and become more robust to deformations, pose changes, and the presence of clutter. As a result, our proposal reaches state-of-the-art classification accuracy on CIFAR-10, the Adience gender recognition task, Stanford Dogs, and UEC-Food100, while obtaining competitive performance on ImageNet, CIFAR-100, CUB200 Birds, and Stanford Cars. In addition, we analyze the different components of our model, showing that the proposed attention modules succeed in finding the most discriminative regions of the image.
Finally, as a proof of concept, we demonstrate that with only local predictions, an augmented neural network can successfully classify an image before reaching any fully connected layer, thus reducing the computation required by up to 10.
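Read as a recipe, the abstract's core idea — softmax attention over a layer's activation map, attention-weighted pooling into local class logits, and a gated combination of those local predictions with the network's own output — can be sketched as follows. This is a minimal NumPy illustration under assumptions: the single-head 1x1-conv weights (`w_att`, `w_cls`), the scalar gates, and all function names are illustrative and do not reproduce the paper's actual formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_module(feats, w_att, w_cls):
    """One hypothetical attention module over a convolutional activation map.

    feats : (C, H, W) feature activations from some intermediate layer.
    w_att : (C,)      weights of a 1x1 conv producing one spatial attention map.
    w_cls : (K, C)    weights of a 1x1 conv producing K local class scores.
    Returns (K,) local class logits computed from the attended activations.
    """
    C, H, W = feats.shape
    flat = feats.reshape(C, H * W)       # flatten spatial dims: (C, HW)
    att = softmax(w_att @ flat)          # spatial attention weights, sum to 1
    attended = flat @ att                # attention-weighted pooling: (C,)
    return w_cls @ attended              # local logits: (K,)

def rectified_prediction(global_logits, local_logits_list, gates):
    """Combine the network's own logits with local attention predictions
    using a softmax-normalized gate (one scalar per prediction source)."""
    all_logits = np.stack([global_logits] + local_logits_list)  # (M+1, K)
    g = softmax(np.asarray(gates, dtype=float))                 # (M+1,)
    return softmax(g @ all_logits)       # rectified class distribution: (K,)

# Toy usage: one module on an 8-channel 4x4 activation map, 3 classes.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))
local = attention_module(feats, rng.standard_normal(8),
                         rng.standard_normal((3, 8)))
probs = rectified_prediction(rng.standard_normal(3), [local], [0.5, 0.5])
```

The "classify before any fully connected layer" proof of concept would correspond to using only the `local` logits of early modules as the prediction, skipping the rest of the forward pass.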
Pages: 502-514
Page count: 13
Related Papers
50 items in total
  • [21] Adversarial erasing attention for fine-grained image classification
    Ji, Jinsheng
    Jiang, Linfeng
    Zhang, Tao
    Zhong, Weilin
    Xiong, Huilin
    MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (15) : 22867 - 22889
  • [22] Aggregate attention module for fine-grained image classification
    Wang, Xingmei
    Shi, Jiahao
    Fujita, Hamido
    Zhao, Yilin
    JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING, 2023, 14 (7) : 8335 - 8345
  • [24] Mixed Attention Mechanism for Small-Sample Fine-grained Image Classification
    Li, Xiaoxu
    Wu, Jijie
    Chang, Dongliang
    Huang, Weifeng
    Ma, Zhanyu
    Cao, Jie
    2019 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2019, : 80 - 85
  • [26] Fine-grained attention mechanism for neural machine translation
    Choi, Heeyoul
    Cho, Kyunghyun
    Bengio, Yoshua
    NEUROCOMPUTING, 2018, 284 : 171 - 176
  • [27] Multi-Attention Multi-Class Constraint for Fine-grained Image Recognition
    Sun, Ming
    Yuan, Yuchen
    Zhou, Feng
    Ding, Errui
    COMPUTER VISION - ECCV 2018, PT XVI, 2018, 11220 : 834 - 850
  • [28] Learning Rich Part Hierarchies With Progressive Attention Networks for Fine-Grained Image Recognition
    Zheng, Heliang
    Fu, Jianlong
    Zha, Zheng-Jun
    Luo, Jiebo
    Mei, Tao
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 476 - 488
  • [29] Learning Scale-Consistent Attention Part Network for Fine-Grained Image Recognition
    Liu, Huabin
    Li, Jianguo
    Li, Dian
    See, John
    Lin, Weiyao
    IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 2902 - 2913
  • [30] Two-Level Progressive Attention Convolutional Network for Fine-Grained Image Recognition
    Wei, Hua
    Zhu, Ming
    Wang, Bo
    Wang, Jiarong
    Sun, Deyao
    IEEE ACCESS, 2020, 8 : 104985 - 104995