Cross-modal links in spatial attention

Cited by: 214
Authors
Driver, J
Spence, C
Affiliations
[1] UCL, Dept Psychol, Inst Cognit Neurosci, London WC1E 6BT, England
[2] Univ Oxford, Dept Expt Psychol, Oxford OX1 3LD, England
Funding
Wellcome Trust (UK)
Keywords
attention; cross-modal; touch; audition; proprioception; vision;
DOI
10.1098/rstb.1998.0286
CLC Number
Q [Biological Sciences]
Subject Classification
07; 0710; 09
Abstract
A great deal is now known about the effects of spatial attention within individual sensory modalities, especially for vision and audition. However, there has been little previous study of possible cross-modal links in attention. Here, we review recent findings from our own experiments on this topic, which reveal extensive spatial links between the modalities. An irrelevant but salient event presented within touch, audition, or vision can attract covert spatial attention in the other modalities (with the one exception that visual events do not attract auditory attention when saccades are prevented). By shifting receptors in one modality relative to another, the spatial coordinates of these cross-modal interactions can be examined. For instance, when a hand is placed in a new position, stimulation of it now draws visual attention to a correspondingly different location, although some aspects of attention do not spatially remap in this way. Cross-modal links are also evident in voluntary shifts of attention. When a person strongly expects a target in one modality (e.g. audition) to appear in a particular location, their judgements improve at that location not only for the expected modality but also for other modalities (e.g. vision), even if events in the latter modality are somewhat more likely elsewhere. Finally, some of our experiments suggest that information from different sensory modalities may be integrated preattentively, to produce the multimodal internal spatial representations in which attention can be directed. Such preattentive cross-modal integration can, in some cases, produce helpful illusions that increase the efficiency of selective attention in complex scenes.
Pages: 1319-1331
Page count: 13