Pay Attention to the Activations: A Modular Attention Mechanism for Fine-Grained Image Recognition

Cited: 64
Authors
Rodriguez, Pau [1 ]
Velazquez, Diego [2 ]
Cucurull, Guillem [1 ]
Gonfaus, Josep M. [3 ]
Roca, F. Xavier [2 ]
Gonzalez, Jordi [2 ]
Affiliations
[1] Element AI, Montreal, PQ H2S 3G9, Canada
[2] Univ Autonoma Barcelona, Comp Vis Ctr, Bellaterra 08193, Spain
[3] Univ Autonoma Barcelona, Visual Tagging Serv, Parc Recerca, Bellaterra 08193, Spain
Keywords
Computer architecture; Computational modeling; Image recognition; Task analysis; Proposals; Logic gates; Clutter; Image Retrieval; Deep Learning; Convolutional Neural Networks; Attention-based Learning; VISUAL-ATTENTION; MODEL; AGE
DOI
10.1109/TMM.2019.2928494
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Fine-grained image recognition is central to many multimedia tasks such as search, retrieval, and captioning. These tasks remain challenging because samples of the same class can differ more in appearance than samples from different classes, an issue caused mainly by deformation, pose changes, and the presence of clutter. In the literature, attention has been one of the most successful strategies for handling these problems. Attention has typically been implemented in neural networks by selecting the most informative regions of the image for classification. In contrast, in this paper attention is applied not at the image level but to the convolutional feature activations. In essence, with our approach the neural model learns to attend to lower-level feature activations without requiring part annotations and uses those activations to update and rectify the output likelihood distribution. The proposed mechanism is modular, architecture-independent, and efficient in terms of both parameters and computation. Experiments demonstrate that well-known networks such as wide residual networks and ResNeXt, when augmented with our approach, systematically improve their classification accuracy and become more robust to changes in deformation and pose and to the presence of clutter. As a result, our proposal reaches state-of-the-art classification accuracy on CIFAR-10, the Adience gender recognition task, Stanford Dogs, and UEC-Food100, while obtaining competitive performance on ImageNet, CIFAR-100, CUB200 Birds, and Stanford Cars. In addition, we analyze the different components of our model, showing that the proposed attention modules succeed in finding the most discriminative regions of the image. Finally, as a proof of concept, we demonstrate that with only local predictions, an augmented neural network can successfully classify an image before reaching any fully connected layer, thus reducing the computational amount up to 10.
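The attention-on-activations idea summarized in the abstract can be illustrated with a short PyTorch-style sketch. This is an assumption on my part rather than the authors' code: the names (`ActivationAttentionHead`, `AttentionAugmentedNet`), the per-head confidence gates, and the softmax fusion of local and global logits are hypothetical. The sketch only shows how local class predictions computed from intermediate convolutional feature maps could be gated and combined with the backbone's output to "rectify" the final likelihood distribution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ActivationAttentionHead(nn.Module):
    """Hypothetical attention head placed on a convolutional feature map:
    it scores spatial positions, pools the activations with that score map,
    and emits a local class prediction plus a confidence gate."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)    # spatial attention scores
        self.classify = nn.Conv2d(in_channels, num_classes, 1)   # per-location class logits
        self.gate = nn.Linear(in_channels, 1)                    # head confidence (gate)

    def forward(self, feats: torch.Tensor):
        b, c, h, w = feats.shape
        # Softmax over all H*W positions -> attention mask that sums to 1.
        attn = F.softmax(self.score(feats).view(b, -1), dim=1).view(b, 1, h, w)
        # Attention-weighted average of per-location class logits (local prediction).
        local_logits = (self.classify(feats) * attn).sum(dim=(2, 3))
        # Gate computed from the attention-pooled feature vector.
        pooled = (feats * attn).sum(dim=(2, 3))
        gate = self.gate(pooled)                                 # shape (b, 1)
        return local_logits, gate


class AttentionAugmentedNet(nn.Module):
    """Wraps an arbitrary stack of backbone stages and rectifies the backbone's
    output distribution with local predictions from the attention heads."""

    def __init__(self, backbone_blocks, head_channels, num_classes, feat_dim):
        super().__init__()
        self.blocks = nn.ModuleList(backbone_blocks)             # stages of any CNN
        self.heads = nn.ModuleList(
            ActivationAttentionHead(ch, num_classes) for ch in head_channels
        )
        self.fc = nn.Linear(feat_dim, num_classes)               # original classifier
        self.net_gate = nn.Parameter(torch.zeros(1))             # gate for the backbone output

    def forward(self, x):
        local_logits, gates = [], []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            logits, gate = head(x)
            local_logits.append(logits)
            gates.append(gate)
        global_logits = self.fc(F.adaptive_avg_pool2d(x, 1).flatten(1))
        # Softmax over gates -> convex combination of local and global predictions.
        all_logits = torch.stack(local_logits + [global_logits], dim=1)         # (b, H+1, K)
        all_gates = torch.cat(gates + [self.net_gate.expand(x.size(0), 1)], 1)  # (b, H+1)
        weights = F.softmax(all_gates, dim=1).unsqueeze(-1)
        return (weights * all_logits).sum(dim=1)
```

Under these assumptions, the "classify before any fully connected layer" proof of concept described in the abstract would correspond to dropping `global_logits` from the fusion and keeping only the heads' gated local predictions.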
Pages: 502 - 514
Number of pages: 13
Related Papers
50 records in total
  • [41] Text to Image GANs with RoBERTa and Fine-grained Attention Networks
    Siddharth, M.
    Aarthi, R.
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2021, 12 (12) : 947 - 955
  • [42] Bilinear Residual Attention Networks for Fine-Grained Image Classification
    Wang, Yang
    Liu, Libo
    LASER & OPTOELECTRONICS PROGRESS, 2020, 57 (12)
  • [43] Subtler mixed attention network on fine-grained image classification
    Liu, Chao
    Huang, Lei
    Wei, Zhiqiang
    Zhang, Wenfeng
    APPLIED INTELLIGENCE, 2021, 51 : 7903 - 7916
  • [44] MASK GUIDED ATTENTION FOR FINE-GRAINED PATCHY IMAGE CLASSIFICATION
    Wang, Jun
    Yu, Xiaohan
    Gao, Yongsheng
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 1044 - 1048
  • [45] A novel fine-grained rumor detection algorithm with attention mechanism
    Zhang, Ke
    Cao, Jianjun
    Pi, Dechang
    NEUROCOMPUTING, 2024, 583
  • [46] Attend and Rectify: A Gated Attention Mechanism for Fine-Grained Recovery
    Rodriguez, Pau
    Gonfaus, Josep M.
    Cucurull, Guillem
    Roca, F. Xavier
    Gonzalez, Jordi
    COMPUTER VISION - ECCV 2018, PT VIII, 2018, 11212 : 357 - 372
  • [47] Looking for the Devil in the Details: Learning Trilinear Attention Sampling Network for Fine-grained Image Recognition
    Zheng, Heliang
    Fu, Jianlong
    Zha, Zheng-Jun
    Luo, Jiebo
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 5007 - 5016
  • [48] Multi-attention Meta Learning for Few-shot Fine-grained Image Recognition
    Zhu, Yaohui
    Liu, Chenlong
    Jiang, Shuqiang
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 1090 - 1096
  • [49] Dual Attention Networks for Few-Shot Fine-Grained Recognition
    Xu, Shu-Lin
    Zhang, Faen
    Wei, Xiu-Shen
    Wang, Jianhua
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 2911 - 2919
  • [50] Weakly Supervised Fine-grained Recognition in a Segmentation-attention Network
    Yu, Nannan
    Zhang, Wenfeng
    Cai, Huanhuan
    ICMLC 2020: 2020 12TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND COMPUTING, 2020, : 324 - 329