Disentangling top-down and bottom-up influences on blinks in the visual and auditory domain

Cited: 10
Authors
Brych, Mareike [1 ]
Handel, Barbara [1 ]
Affiliation
[1] Univ Wurzburg, Dept Psychol 3, Rontgenring 11, D-97070 Wurzburg, Germany
Funding
European Research Council;
Keywords
Eye blinks; Visual domain; Auditory domain; Attention; Oddball; MICROSACCADIC RESPONSES; EYE BLINKS; INTEGRATION; EYEBLINKS; ODDBALL; MEMORY;
DOI
10.1016/j.ijpsycho.2020.11.002
Chinese Library Classification (CLC)
B84 [Psychology];
Subject Classification Code
04; 0402;
Abstract
Sensory input as well as cognitive factors can drive the modulation of blinking. Our aim was to dissociate sensory-driven bottom-up influences on blinking behavior from cognitive top-down influences and to compare these influences between the auditory and the visual domain. Using an oddball paradigm, we found a significant pre-stimulus decrease in blink probability for visual input compared to auditory input. Sensory input further led to an early post-stimulus increase in blink rate in both modalities if a task demanded attention to the input; without a task, only visual input caused a pronounced early increase. In the case of a target or the omission of a stimulus (as compared to standard input), an additional late increase in blink rate was found in both the auditory and the visual domain. This suggests that blink modulation must be based on an interpretation of the input but does not require any sensory input at all to occur. Our results show a complex modulation of blinking driven by top-down factors such as prediction and attention, in addition to sensory-based influences. The magnitude of the modulation is mainly influenced by general attentional demands, while the latency of the modulation makes it possible to dissociate general from specific top-down influences that are independent of the sensory domain.
Pages: 400 - 410
Page count: 11
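
The abstract describes time-resolved changes in blink probability before and after stimulus onset (a pre-stimulus decrease, plus early and late post-stimulus increases). As an illustrative sketch only, and not the authors' analysis code, the Python snippet below shows one common way such a peri-stimulus blink-rate time course can be computed from detected blink onsets and stimulus onset times; the helper name peristimulus_blink_rate, the analysis window, the bin width, and all example data are assumptions made for illustration.

    # Sketch only: peri-stimulus blink rate under assumed parameters and placeholder data.
    import numpy as np

    def peristimulus_blink_rate(blink_onsets, stimulus_onsets,
                                window=(-1.0, 2.0), bin_width=0.25):
        # Blink rate (blinks/s) in time bins relative to stimulus onset,
        # averaged over all stimulus presentations.
        edges = np.arange(window[0], window[1] + bin_width, bin_width)
        counts = np.zeros(len(edges) - 1)
        for t0 in stimulus_onsets:
            rel = blink_onsets - t0                     # blink times relative to this stimulus
            counts += np.histogram(rel, bins=edges)[0]  # blinks falling into each bin
        rate = counts / (len(stimulus_onsets) * bin_width)
        centers = edges[:-1] + bin_width / 2.0
        return centers, rate

    # Hypothetical blink and stimulus onset times in seconds (placeholder data).
    blinks = np.array([0.4, 5.3, 6.1, 10.2, 11.0])
    stimuli = np.array([1.0, 6.0, 11.0])
    centers, rate = peristimulus_blink_rate(blinks, stimuli)
    print(np.round(centers, 2))
    print(np.round(rate, 2))

The window and bin width chosen here are arbitrary; in practice such parameters strongly affect how pre- versus post-stimulus effects appear and would be reported alongside the results.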