Visual localization ability influences cross-modal bias

Cited by: 117
Authors
Hairston, WD [1 ]
Wallace, MT [1 ]
Vaughan, JW [1 ]
Stein, BE [1 ]
Norris, JL [1 ]
Schirillo, JA [1 ]
Affiliation
[1] Wake Forest Univ, Bowman Gray Sch Med, Dept Neurobiol & Anat, Winston Salem, NC 27157 USA
Keywords
DOI
10.1162/089892903321107792
Chinese Library Classification
Q189 [Neuroscience]
Discipline Code
071006
Abstract
The ability of a visual signal to influence the localization of an auditory target (i.e., "cross-modal bias") was examined as a function of the spatial disparity between the two stimuli and their absolute locations in space. Three experimental issues were examined: (a) the effect of a spatially disparate visual stimulus on auditory localization judgments; (b) how the ability to localize visual, auditory, and spatially aligned multisensory (visual-auditory) targets is related to cross-modal bias; and (c) the relationship between the magnitude of cross-modal bias and the perception that the two stimuli are spatially "unified" (i.e., originate from the same location). Whereas variability in localization of auditory targets was large and fairly uniform for all tested locations, variability in localizing visual or spatially aligned multisensory targets was much smaller, and increased with increasing distance from the midline. This trend proved to be strongly correlated with biasing effectiveness, for although visual-auditory bias was unexpectedly large in all conditions tested, it decreased progressively (as localization variability increased) with increasing distance from the midline. Thus, central visual stimuli had a substantially greater biasing effect on auditory target localization than did more peripheral visual stimuli. It was also apparent that cross-modal bias decreased as the degree of visual-auditory disparity increased. Consequently, the greatest visual-auditory biases were obtained with small disparities at central locations. In all cases, the magnitude of these biases covaried with judgments of spatial unity. The results suggest that functional properties of the visual system play the predominant role in determining these visual-auditory interactions and that cross-modal biases can be substantially greater than previously noted.
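The core argument of the abstract, that the less variable (visual) estimate dominates the combined percept, is often formalized as reliability-weighted cue combination. The sketch below is not the authors' analysis; it is a minimal illustration, using hypothetical standard-deviation values, of how low visual localization variability translates into a large visual bias on auditory judgments and why that bias shrinks as visual variability grows with eccentricity.

# Minimal sketch of reliability-weighted (precision-weighted) cue combination.
# NOTE: this is an illustrative model, not the analysis reported in the paper;
# all standard-deviation and disparity values below are hypothetical.

sigma_aud = 8.0                         # assumed auditory localization SD (deg), roughly uniform across space
sigma_vis = {0: 1.0, 30: 2.5, 60: 5.0}  # assumed visual localization SD (deg), growing with eccentricity

def visual_weight(sv: float, sa: float) -> float:
    """Weight given to the visual cue when each cue is weighted by its precision (1/variance)."""
    return (1.0 / sv**2) / (1.0 / sv**2 + 1.0 / sa**2)

disparity = 10.0  # hypothetical visual-auditory spatial disparity (deg)
for ecc, sv in sigma_vis.items():
    w = visual_weight(sv, sigma_aud)
    # Predicted shift ("bias") of the auditory judgment toward the visual location:
    print(f"eccentricity {ecc:>2} deg: visual weight {w:.2f}, predicted bias {w * disparity:.1f} deg")

On these assumed numbers the visual weight falls from about 0.98 at the midline to about 0.72 at 60 degrees of eccentricity, mirroring the abstract's finding that central visual stimuli bias auditory localization more strongly than peripheral ones.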
Pages: 20-29
Page count: 10