Visual Attention with Deep Neural Networks

Cited by: 0
Authors
Canziani, Alfredo [1]
Culurciello, Eugenio [1]
Affiliations
[1] Purdue Univ, Weldon Sch Biomed Engn, W Lafayette, IN 47907 USA
Keywords
MODEL; SALIENCY;
DOI: not available
Chinese Library Classification (CLC): TP [Automation technology; computer technology]
Subject classification code: 0812
Abstract
Animals use attentional mechanisms to process an enormous amount of sensory input in real time. Analogously, computerised systems could exploit similar techniques to achieve better timing performance. Visual attentional control uses bottom-up and top-down saliency maps to establish the most relevant locations to observe. This article presents a novel, fully learnt, unbiased, biologically plausible algorithm for computing both feature-based and proto-object saliency maps, using a deep convolutional neural network trained only on a single-class classification task, by unveiling its internal attentional apparatus. We are able to process 2-megapixel (MP) colour images in real time, i.e. at more than 10 frames per second, producing a 2 MP map of interest.
Pages: 3
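
The abstract above outlines, but does not detail, how a map of interest is read out of the network's internal activations. As a minimal sketch of that general idea (assuming a standard PyTorch ImageNet backbone; ResNet-18, the function name activation_saliency, and the file name scene.jpg are illustrative choices, not the paper's actual network or pipeline), the snippet below channel-averages an intermediate convolutional feature map and upsamples it to the input resolution to obtain a coarse, feature-based saliency map.

```python
# Minimal sketch (assumption, not the authors' exact pipeline): read a coarse
# saliency map out of the internal activations of a CNN trained for
# classification, by channel-averaging an intermediate feature map and
# upsampling it back to the input resolution.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

def activation_saliency(image_path: str) -> torch.Tensor:
    # Any ImageNet-pretrained backbone stands in for the paper's network here.
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    trunk = torch.nn.Sequential(*list(net.children())[:-2])  # drop avgpool + fc

    img = Image.open(image_path).convert("RGB")
    w, h = img.size
    x = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])(img).unsqueeze(0)                          # 1 x 3 x H x W

    with torch.no_grad():
        feats = trunk(x)                          # 1 x C x H/32 x W/32

    sal = feats.mean(dim=1, keepdim=True)         # channel-averaged activation map
    sal = F.interpolate(sal, size=(h, w), mode="bilinear", align_corners=False)
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)  # normalise to [0, 1]
    return sal[0, 0]                              # H x W map of interest

# saliency = activation_saliency("scene.jpg")      # hypothetical input image
```

The channel average is only one simple read-out of the internal attentional apparatus; the proto-object maps mentioned in the abstract would require grouping these responses further, which this sketch does not attempt.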