BR-NPA: A non-parametric high-resolution attention model to improve the interpretability of attention

Cited by: 0
Authors
Gomez T. [1,3]
Ling S. [1,3]
Fréour T. [2,4,5,6]
Mouchère H. [1,3]
Affiliations
[1] Christian Pauc Street, Nantes
[2] 63 Magellan Quai, Nantes
[3] Nantes University, CNRS, LS2N, CNRS UMR 6004, Nantes
[4] Nantes University Hospital, Department of Reproductive Medicine and Biology, Nantes
[5] Nantes University, Nantes University Hospital, Inserm, CRTI, Inserm UMR 1064, Nantes
[6] Nantes University, Nantes University Hospital, Inserm, CNRS, SFR Santé, Inserm UMS 016, CNRS UMS, 3556, Nantes
Keywords
Deep learning; Interpretability; Non-parametric; Resolution; Spatial attention;
DOI
10.1016/j.patcog.2022.108927
CLC Classification Number
TB18 [Ergonomics]; Q98 [Anthropology];
Subject Classification Code
030303; 1201;
Abstract
The prevalence of attention mechanisms has raised concerns about the interpretability of attention distributions. Although attention offers insight into how a model operates, using it as an explanation of model predictions remains highly dubious. The community is still seeking more interpretable strategies for identifying the local active regions that contribute most to the final decision. To improve the interpretability of existing attention models, we propose a novel Bilinear Representative Non-Parametric Attention (BR-NPA) strategy that captures task-relevant, human-interpretable information. The target model is first distilled to produce higher-resolution intermediate feature maps. Representative features are then grouped from these maps based on local pairwise feature similarity to produce finer-grained, more precise attention maps that highlight task-relevant parts of the input. The obtained attention maps are ranked according to the activity level of the compound feature, which indicates the importance of the highlighted regions. The proposed model can be easily adapted to a wide variety of modern deep models that involve classification. Extensive quantitative and qualitative experiments show more comprehensive and accurate visual explanations than state-of-the-art attention models and visualization methods across multiple tasks, including fine-grained image classification, few-shot classification, and person re-identification, without compromising classification accuracy. The proposed visualization model sheds light on how neural networks ‘pay their attention’ differently in different tasks. © 2022
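To make the non-parametric grouping idea in the abstract more concrete, below is a minimal PyTorch sketch. It is an illustration under stated assumptions, not the paper's actual BR-NPA implementation: the function name, the number of maps, the cosine-similarity grouping rule, and the activity down-weighting are all introduced here for illustration, and the distillation step that gives the backbone higher-resolution feature maps is assumed to have already happened upstream.

```python
# Minimal sketch of the grouping idea described in the abstract (NOT the
# authors' implementation): each attention map is seeded by the most active
# spatial feature, locations are grouped by cosine similarity to that
# representative, and maps are produced in decreasing order of seed activity.
import torch
import torch.nn.functional as F

def non_parametric_attention(feat: torch.Tensor, num_maps: int = 3) -> torch.Tensor:
    """feat: (B, C, H, W) intermediate feature map; returns (B, num_maps, H, W)."""
    b, c, h, w = feat.shape
    flat = feat.flatten(2).transpose(1, 2)                      # (B, H*W, C)
    activity = flat.norm(dim=-1)                                # activity level per location
    maps = []
    for _ in range(num_maps):
        idx = activity.argmax(dim=1)                            # most active remaining location
        rep = flat[torch.arange(b, device=feat.device), idx]    # (B, C) representative feature
        sim = F.cosine_similarity(flat, rep.unsqueeze(1).expand_as(flat), dim=-1)
        sim = sim.clamp(min=0.0)                                # keep positively similar locations
        activity = activity * (1.0 - sim)                       # down-weight already-grouped locations
        maps.append(sim.view(b, h, w))
    return torch.stack(maps, dim=1)

# Hypothetical usage on a ResNet-style stage output:
feats = torch.randn(2, 512, 14, 14)
attn_maps = non_parametric_attention(feats)                     # (2, 3, 14, 14)
```

Ranking falls out of the construction: the first map is seeded by the most active feature, which mirrors the abstract's claim that the ordered maps convey the importance of the highlighted regions.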
Related Papers
50 results in total
  • [31] VEDAM: Urban Vegetation Extraction Based on Deep Attention Model from High-Resolution Satellite Images
    Yang, Bin
    Zhao, Mengci
    Xing, Ying
    Zeng, Fuping
    Sun, Zhaoyang
    ELECTRONICS, 2023, 12 (05)
  • [32] An optimization high-resolution network for human pose recognition based on attention mechanism
    Yang, Jinlong
    Feng, Yu
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 : 45535 - 45552
  • [33] A Deformable Attention Network for High-Resolution Remote Sensing Images Semantic Segmentation
    Zuo, Renxiang
    Zhang, Guangyun
    Zhang, Rongting
    Jia, Xiuping
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [34] HRAM-VITON: High-Resolution Virtual Try-On with Attention Mechanism
    Chen, Yue
    Liang, Xiaoman
    Lin, Mugang
    Zhang, Fachao
    Zhao, Huihuang
    CMC-COMPUTERS MATERIALS & CONTINUA, 2025, 82 (02) : 2753 - 2768
  • [35] A high-resolution feature difference attention network for the application of building change detection
    Wang, Xue
    Du, Junhan
    Tan, Kun
    Ding, Jianwei
    Liu, Zhaoxian
    Pan, Chen
    Han, Bo
    INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2022, 112
  • [36] High-Resolution Self-attention with Fair Loss for Point Cloud Segmentation
    Liu, Qiyuan
    Lu, Jinzheng
    Li, Qiang
    Huang, Bingsen
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT V, 2024, 14451 : 344 - 356
  • [37] A High-Resolution Velocity Inversion Method Based on Attention Convolutional Neural Network
    Li, Wenda
    Liu, Hong
    Wu, Tianqi
    Huo, Shoudong
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [38] EfficientViT: Lightweight Multi-Scale Attention for High-Resolution Dense Prediction
    Cai, Han
    Li, Junyan
    Hu, Muyan
    Gan, Chuang
    Han, Song
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 17256 - 17267
  • [39] Feature Enhancement Attention for Road Extraction in High-Resolution Remote Sensing Image
    Yu, Hang
    Li, Chenyang
    Guo, Yuru
    Zhou, Suiping
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2024, 17 : 19805 - 19816
  • [40] An optimization high-resolution network for human pose recognition based on attention mechanism
    Yang, Jinlong
    Feng, Yu
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (15) : 45535 - 45552