A feature-wise attention module based on the difference with surrounding features for convolutional neural networks

Cited by: 9
Authors
Tan, Shuo [1 ]
Zhang, Lei [1 ]
Shu, Xin [1 ]
Wang, Zizhou [1 ]
Affiliations
[1] Sichuan Univ, Coll Comp Sci, Machine Intelligence Lab, Chengdu 610065, Peoples R China
Keywords
feature-wise attention; surround suppression; image classification; convolutional neural networks;
DOI
10.1007/s11704-022-2126-1
CLC Classification Number
TP [Automation technology; computer technology]
Subject Classification Code
0812
Abstract
The attention mechanism has become a widely studied method for improving the performance of convolutional neural networks (CNNs). Most research focuses on designing channel-wise and spatial-wise attention modules but neglects the unique information carried by each individual feature, which is critical for deciding both "what" and "where" to focus on. In this paper, a feature-wise attention module is proposed that assigns an attention weight to every feature of the input feature map. Specifically, the module is inspired by the well-known surround suppression phenomenon in neuroscience and consists of two sub-modules: a Minus-Square-Add (MSA) operation and a group of learnable non-linear mapping functions. The MSA imitates surround suppression and defines an energy function that can be applied to each feature to measure its importance. The group of non-linear functions refines the energies calculated by the MSA into more reasonable values. Together, these two sub-modules capture feature-wise attention well. Moreover, owing to their simple structure and few parameters, the proposed module can be easily integrated into almost any CNN. To verify its performance and effectiveness, experiments were conducted on the CIFAR-10, CIFAR-100, CINIC-10, and Tiny-ImageNet datasets. The experimental results demonstrate that the proposed module is flexible and effective in improving the performance of CNNs.
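For illustration only, the sketch below shows one way the feature-wise attention described in the abstract could be realized in PyTorch. The concrete form of the MSA energy (read here as the squared difference between each feature and the mean of its surrounding features in the same channel) and of the learnable non-linear mapping (here, a per-channel affine transform followed by a sigmoid) are assumptions based on the abstract, not the authors' published implementation; the class name FeatureWiseAttention and its parameters are hypothetical.

# Hypothetical sketch of a feature-wise attention module inspired by the
# abstract (MSA energy + learnable non-linear mapping). The exact MSA
# definition and mapping functions are assumptions, not the paper's code.
import torch
import torch.nn as nn

class FeatureWiseAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # One learnable scale/shift per channel refines the raw energy
        # before the sigmoid gate (a stand-in for the paper's group of
        # learnable non-linear mapping functions).
        self.gamma = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W). "Minus-Square-Add" is interpreted as: subtract a
        # surround statistic (the channel mean), square the difference, and
        # normalize by its aggregate -- one energy value per feature.
        mu = x.mean(dim=(2, 3), keepdim=True)            # surround statistic
        energy = (x - mu).pow(2)                         # per-feature energy
        energy = energy / (energy.mean(dim=(2, 3), keepdim=True) + 1e-6)
        attn = torch.sigmoid(self.gamma * energy + self.beta)
        return x * attn                                  # feature-wise reweighting

# Usage: the module is parameter-light, so it can be dropped after any
# convolutional block without changing the backbone's output shape.
if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)
    out = FeatureWiseAttention(64)(x)
    print(out.shape)  # torch.Size([2, 64, 32, 32])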
Pages: 10
Related Papers (50 total)
  • [41] ULSAM: Ultra-Lightweight Subspace Attention Module for Compact Convolutional Neural Networks
    Saini, Rajat
    Jha, Nandan Kumar
    Das, Bedanta
    Mittal, Sparsh
    Mohan, C. Krishna
    2020 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2020, : 1616 - 1625
  • [42] SPAM: Spatially Partitioned Attention Module in Deep Convolutional Neural Networks for Image Classification
    Wang, F.
    Qiao, R.
    Hsi-An Chiao Tung Ta Hsueh/Journal of Xi'an Jiaotong University, 2023, 57 (09): : 185 - 192
  • [43] An efficient attention module for 3d convolutional neural networks in action recognition
    Guanghao Jiang
    Xiaoyan Jiang
    Zhijun Fang
    Shanshan Chen
    Applied Intelligence, 2021, 51 : 7043 - 7057
  • [44] Tensor decomposition based attention module for spiking neural networks
    Deng, Haoyu
    Zhu, Ruijie
    Qiu, Xuerui
    Duan, Yule
    Zhang, Malu
    Deng, Liang-Jian
    KNOWLEDGE-BASED SYSTEMS, 2024, 295
  • [45] Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation
    Sattarzadeh, Sam
    Sudhakar, Mahesh
    Lem, Anthony
    Mehryar, Shervin
    Plataniotis, Konstantinos N.
    Jang, Jongseong
    Kim, Hyunwoo
    Jeong, Yeonjeong
    Lee, Sangmin
    Bae, Kyunghoon
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 11639 - 11647
  • [46] Reparameterized attention for convolutional neural networks
    Wu, Yiming
    Li, Ruixiang
    Yu, Yunlong
    Li, Xi
    PATTERN RECOGNITION LETTERS, 2022, 164 : 89 - 95
  • [47] Attention-based Convolutional Neural Networks for Sentence Classification
    Zhao, Zhiwei
    Wu, Youzheng
    17TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2016), VOLS 1-5: UNDERSTANDING SPEECH PROCESSING IN HUMANS AND MACHINES, 2016, : 705 - 709
  • [48] Fire Detection based on Convolutional Neural Networks with Channel Attention
    Zhang, Xiaobo
    Qian, Kun
    Jing, Kaihe
    Yang, Jianwei
    Yu, Hai
    2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 3080 - 3085
  • [49] Causal Discovery with Attention-Based Convolutional Neural Networks
    Nauta, Meike
    Bucur, Doina
    Seifert, Christin
    MACHINE LEARNING AND KNOWLEDGE EXTRACTION, 2019, 1 (01):
  • [50] Wavelet Based Edge Feature Enhancement for Convolutional Neural Networks
    De Silva, D. D. N.
    Fernando, S.
    Piyatilake, I. T. S.
    Karunarathne, A. V. S.
    ELEVENTH INTERNATIONAL CONFERENCE ON MACHINE VISION (ICMV 2018), 2019, 11041