Simultaneity learning in vision, audition, tactile sense and their cross-modal combinations

Cited by: 27
Authors:
Virsu, Veijo [1]
Oksanen-Hennah, Henna [1]
Vedenpaa, Anita [1]
Jaatinen, Pentti [1]
Lahti-Nuuttila, Pekka [1]
Affiliations:
[1] Univ Helsinki, Dept Psychol, FIN-00014 Helsinki, Finland
Keywords:
psychophysics; temporal processing; perceptual learning; perceptual synchrony; multisensory integration; simultaneity constancy
DOI: 10.1007/s00221-007-1254-z
Chinese Library Classification: Q189 [Neuroscience]
Discipline code: 071006
Abstract:
Latencies of sensory neurons vary depending on stimulus variables such as intensity, contrast, distance and adaptation. Therefore, different parts of an object and simultaneous environmental events could often elicit non-simultaneous neural representations. However, despite the neural discrepancies of timing, our actions and object perceptions are usually veridical. Recent results suggest that this temporal veridicality is assisted by the so-called simultaneity constancy which actively compensates for neural timing asynchronies. We studied whether a corresponding compensation by simultaneity constancy could be learned in natural interaction with the environment without explicit feedback. Brief stimuli, whose objective simultaneity/non-simultaneity was judged, consisted of flashes, clicks or touches, and their cross-modal combinations. The stimuli were presented as two concurrent trains. Twenty-eight adult participants practised unimodal (visual, auditory and tactile) and cross-modal (audiovisual, audiotactile and visuotactile) simultaneity judgement tasks in eight sessions, two sessions per week. Effects of practice were tested 7 months later. All tasks indicated improved judgements of simultaneity that were also long-lasting. This simultaneity learning did not affect relative temporal resolution (Weber fraction). Transfer of learning between practised tasks was minimal, which suggests that simultaneity learning mechanisms are not centralised but modally specific. Our results suggest that natural perceptual learning can generate simultaneity-constancy-like phenomena in a well-differentiated and long-lasting manner and concomitantly in several sensory systems. Hebbian learning can explain how experience with environmental simultaneity and non-simultaneity can develop the veridicality of perceived synchrony.
Pages: 525-537 (13 pages)
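
The abstract refers to two quantitative notions that a short sketch can make concrete: the Weber fraction used as the measure of relative temporal resolution, and the Hebbian account of how experience with environmental simultaneity could calibrate perceived synchrony. The Python sketch below is illustrative only and is not code or data from the paper; the function names, numerical values, and the simplified running-average update rule are assumptions introduced for the example.

    # Illustrative sketch only -- not code from the paper. All values are hypothetical.

    def weber_fraction(jnd_ms: float, base_interval_ms: float) -> float:
        """Relative temporal resolution: just-noticeable change in asynchrony
        divided by the base interval (delta t / t)."""
        return jnd_ms / base_interval_ms

    def recalibrate_lag(experienced_lags_ms, learning_rate=0.05, initial_estimate_ms=40.0):
        """Hebbian-style coincidence learning, simplified to a running average:
        each experienced audio-visual lag nudges the internal 'zero-lag' estimate
        toward the asynchronies the environment actually produces."""
        estimate = initial_estimate_ms
        for lag in experienced_lags_ms:
            estimate += learning_rate * (lag - estimate)
        return estimate

    if __name__ == "__main__":
        # Hypothetical: a 30 ms just-noticeable change on a 200 ms base interval.
        print(f"Weber fraction: {weber_fraction(30.0, 200.0):.2f}")  # 0.15
        # Mostly simultaneous environmental events (lag ~ 0 ms) pull the estimate
        # toward zero, mirroring the idea that experience can develop veridical
        # perceived synchrony.
        print(f"Recalibrated lag estimate: {recalibrate_lag([0.0] * 200):.1f} ms")

The running-average update is a deliberate simplification of the modality-specific Hebbian coincidence learning the authors invoke; the abstract reports behavioural results only, so the sketch is conceptual rather than a model of their data.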