Neural integration of top-down spatial and feature-based information in visual search

Cited by: 159
Authors
Egner, Tobias [1 ]
Monti, Jim M. P. [1 ]
Trittschuh, Emily H. [1 ]
Wieneke, Christina A. [1 ]
Hirsch, Joy [2 ]
Mesulam, M. -Marsel [1 ]
Affiliations
[1] Northwestern Univ, Cognit Neurol & Alzheimers Dis Ctr, Feinberg Sch Med, Chicago, IL 60611 USA
[2] Columbia Univ, Neurol Inst, Funct MRI Res Ctr, New York, NY 10032 USA
Source
JOURNAL OF NEUROSCIENCE | 2008, Vol. 28, No. 24
Keywords
visual search; attention; spatial attention; feature-based attention; top-down salience map; oculomotor planning;
DOI
10.1523/JNEUROSCI.1262-08.2008
CLC Classification
Q189 [Neuroscience]
Subject Classification
071006
Abstract
Visual search is aided by previous knowledge regarding distinguishing features and probable locations of a sought-after target. However, how the human brain represents and integrates concurrent feature-based and spatial expectancies to guide visual search is currently not well understood. Specifically, it is not clear whether spatial and feature-based search information is initially represented in anatomically segregated regions, nor at which level of processing expectancies regarding target features and locations may be integrated. To address these questions, we independently and parametrically varied the degree of spatial and feature-based (color) cue information concerning the identity of an upcoming visual search target while recording blood oxygenation level-dependent (BOLD) responses in human subjects. Search performance improved with the amount of spatial and feature-based cue information, and cue-related BOLD responses showed that, during preparation for visual search, spatial and feature cue information were represented additively in shared frontal, parietal, and cingulate regions. These data show that representations of spatial and feature-based search information are integrated in source regions of top-down biasing and oculomotor planning before search onset. The purpose of this anticipatory integration could lie with the generation of a "top-down salience map," a search template of primed target locations and features. Our results show that this role may be served by the intraparietal sulcus, which additively integrated a spatially specific activation gain in relation to spatial cue information with a spatially global activation gain in relation to feature cue information.
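The abstract's central finding is that spatial and feature cue information combine additively in cue-related BOLD responses, with no interaction between the two dimensions. A minimal sketch of what such an additive model implies (not the authors' actual analysis; baseline and gain coefficients here are hypothetical):

```python
# Illustrative additive model of cue-related BOLD amplitude, as implied by
# the abstract: each cue dimension contributes its own independent gain.
# Coefficient values are hypothetical, chosen only for demonstration.

def bold_amplitude(spatial_info, feature_info,
                   baseline=1.0, gain_spatial=0.4, gain_feature=0.25):
    """Additive combination: no spatial x feature interaction term."""
    return baseline + gain_spatial * spatial_info + gain_feature * feature_info

# Additivity check: the effect of adding one unit of spatial cue
# information is identical at every feature-cue level. A nonzero
# interaction term would break this invariance.
levels = [0, 1, 2, 3]  # parametric cue-information levels
spatial_effects = [bold_amplitude(1, f) - bold_amplitude(0, f) for f in levels]
assert all(abs(e - spatial_effects[0]) < 1e-12 for e in spatial_effects)
```

Under this kind of model, testing for an interaction term (e.g., in a parametric fMRI regression) distinguishes additive integration from multiplicative gain modulation.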
Pages: 6141-6151 (11 pages)
Related Articles
50 records in total
  • [41] Neural representation of visual objects: encoding and top-down activation
    Miyashita, Y
    Hayashi, T
    CURRENT OPINION IN NEUROBIOLOGY, 2000, 10 (02) : 187 - 194
  • [42] Unsupervised neural network based feature extraction using weak top-down constraints
    Kamper, Herman
    Elsner, Micha
    Jansen, Aren
    Goldwater, Sharon
    2015 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP), 2015, : 5818 - 5822
  • [43] Feature-based information integration for CAD/CAPP
    Duan, Xiaofeng
    Ning, Ruxin
    Jisuanji Jicheng Zhizao Xitong/Computer Integrated Manufacturing Systems, CIMS, 1996, 2 (02): : 16 - 20
  • [44] On feature-based product structural information integration
    Chen, Xiao-hui
    Yu, Xin-lu
    Jixie Kexue Yu Jishu/Mechanical Science and Technology, 2000, 19 (02): : 317 - 318
  • [45] Real-world visual search is dominated by top-down guidance
    Chen, Xin
    Zelinsky, Gregory J.
    VISION RESEARCH, 2006, 46 (24) : 4118 - 4133
  • [46] Ironic capture: top-down expectations exacerbate distraction in visual search
    Huffman, Greg
    Rajsic, Jason
    Pratt, Jay
    PSYCHOLOGICAL RESEARCH-PSYCHOLOGISCHE FORSCHUNG, 2019, 83 (05): : 1070 - 1082
  • [48] The role of top-down task set for attentional capture in visual search
    Kiss, Monika
    Eimer, Martin
    PSYCHOPHYSIOLOGY, 2009, 46 : S104 - S104
  • [50] Integration of visual and tactile stimuli: top-down influences require time
    Shore, DI
    Simic, N
    EXPERIMENTAL BRAIN RESEARCH, 2005, 166 (3-4) : 509 - 517