Do Humans Look Where Deep Convolutional Neural Networks "Attend"?

Cited by: 5
Authors
Ebrahimpour, Mohammad K. [1 ]
Ben Falandays, J. [2 ]
Spevack, Samuel [2 ]
Noelle, David C. [1 ,2 ]
Affiliations
[1] Univ Calif, EECS, Merced, CA 95343 USA
[2] Univ Calif, Cognit & Informat Sci, Merced, CA USA
Keywords
Visual spatial attention; Computer vision; Convolutional Neural Networks; Densely connected attention maps; Class Activation Maps; Sensitivity analysis;
DOI
10.1007/978-3-030-33723-0_5
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep Convolutional Neural Networks (CNNs) have recently begun to exhibit human level performance on some visual perception tasks. Performance remains relatively poor, however, on some vision tasks, such as object detection: specifying the location and object class for all objects in a still image. We hypothesized that this gap in performance may be largely due to the fact that humans exhibit selective attention, while most object detection CNNs have no corresponding mechanism. In examining this question, we investigated some well-known attention mechanisms in the deep learning literature, identifying their weaknesses and leading us to propose a novel attention algorithm called the Densely Connected Attention Model. We then measured human spatial attention, in the form of eye tracking data, during the performance of an analogous object detection task. By comparing the learned representations produced by various CNN architectures with that exhibited by human viewers, we identified some relative strengths and weaknesses of the examined computational attention mechanisms. Some CNNs produced attentional patterns somewhat similar to those of humans. Others focused processing on objects in the foreground. Still other CNN attentional mechanisms produced usefully interpretable internal representations. The resulting comparisons provide insights into the relationship between CNN attention algorithms and the human visual system.
Pages: 53 - 65
Page count: 13
Related Papers
50 records total
  • [21] Spatial deep convolutional neural networks
    Wang, Qi
    Parker, Paul A.
    Lund, Robert
    SPATIAL STATISTICS, 2025, 66
  • [22] Convergence of deep convolutional neural networks
    Xu, Yuesheng
    Zhang, Haizhang
    NEURAL NETWORKS, 2022, 153 : 553 - 563
  • [23] Fusion of Deep Convolutional Neural Networks
    Suchy, Robert
    Ezekiel, Soundararajan
    Cornacchia, Maria
    2017 IEEE APPLIED IMAGERY PATTERN RECOGNITION WORKSHOP (AIPR), 2017,
  • [24] Generalisation in humans and deep neural networks
    Geirhos, Robert
    Temme, Carlos R. Medina
    Rauber, Jonas
    Schuett, Heiko H.
    Bethge, Matthias
    Wichmann, Felix A.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [25] Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions?
    Das, Abhishek
    Agrawal, Harsh
    Zitnick, Larry
    Parikh, Devi
    Batra, Dhruv
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2017, 163 : 90 - 100
  • [26] Do deep convolutional neural networks really need to be deep when applied for remote scene classification?
    Luo, Chang
    Wang, Jie
    Feng, Gang
    Xu, Suhui
    Wang, Shiqiang
    JOURNAL OF APPLIED REMOTE SENSING, 2017, 11
  • [27] Comparing Object Recognition in Humans and Deep Convolutional Neural Networks-An Eye Tracking Study
    van Dyck, Leonard Elia
    Kwitt, Roland
    Denzler, Sebastian Jochen
    Gruber, Walter Roland
    FRONTIERS IN NEUROSCIENCE, 2021, 15
  • [28] Plug and Play Deep Convolutional Neural Networks
    Neary, Patrick
    Allan, Vicki
    PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE (ICAART), VOL 2, 2019, : 388 - 395
  • [29] An Efficient Accelerator for Deep Convolutional Neural Networks
    Kuo, Yi-Xian
    Lai, Yeong-Kang
    2020 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS - TAIWAN (ICCE-TAIWAN), 2020,
  • [30] Elastography mapped by deep convolutional neural networks
    Liu, DongXu
    Kruggel, Frithjof
    Sun, LiZhi
    SCIENCE CHINA-TECHNOLOGICAL SCIENCES, 2021, (07) : 1567 - 1574