Hearing temperatures: employing machine learning for elucidating the cross-modal perception of thermal properties through audition

Cited: 0
Authors
Wenger, Mohr [1 ,2 ]
Maimon, Amber [1 ,3 ]
Yizhar, Or [1 ,2 ,4 ]
Snir, Adi [1 ]
Sasson, Yonatan [1 ]
Amedi, Amir [1 ]
Affiliations
[1] Reichman Univ, Baruch Ivcher Inst Brain Cognit & Technol, Baruch Ivcher Sch Psychol, Herzliyya, Israel
[2] Hebrew Univ Jerusalem, Dept Cognit & Brain Sci, Jerusalem, Israel
[3] Ben Gurion Univ Negev, Dept Brain & Cognit Sci, Computat Psychiat & Neurotechnol Lab, Beer Sheva, Israel
[4] Max Planck Inst Human Dev, Res Grp Adapt Memory & Decis Making, Berlin, Germany
Source
FRONTIERS IN PSYCHOLOGY | 2024, Vol. 15
Keywords
cross-modal correspondences; multisensory integration; sensory; thermal perception; multimodal; BRAIN; ORIGINS; SOUND; LIPS
DOI
10.3389/fpsyg.2024.1353490
Chinese Library Classification
B84 [Psychology]
Discipline codes
04; 0402
Abstract
People can use their sense of hearing to discern thermal properties, though they are largely unaware that they can do so. While people unequivocally claim that they cannot perceive the temperature of water from the sound of it being poured, our research further strengthens the evidence that they can. This multimodal ability is implicitly acquired in humans, likely through perceptual learning over a lifetime of exposure to the differing physical attributes of pouring water. In this study, we explore people's perception of this intriguing cross-modal correspondence and investigate the psychophysical foundations of this complex ecological mapping by employing machine learning. Our results show that not only can humans classify the auditory properties of pouring water in practice, but the physical characteristics underlying this phenomenon can also be classified by a pre-trained deep neural network.
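The abstract's core claim — that water temperature leaves a machine-classifiable trace in the audio signal — can be illustrated with a toy sketch. The snippet below is a minimal illustration only, not the authors' pipeline: it synthesizes fake "hot" and "cold" pours (assuming, purely for illustration, that they differ in dominant frequency), extracts a single spectral-centroid feature, and separates the two classes with a midpoint threshold. The paper itself uses real recordings and a pre-trained deep neural network.

```python
import numpy as np

SR = 16000                       # sample rate in Hz (assumed)
rng = np.random.default_rng(0)   # fixed seed for reproducibility

def synth_pour(temp, n=SR):
    """Toy stand-in for a one-second pouring-water recording.

    'Hot' is simulated with lower-frequency energy than 'cold' —
    an illustrative assumption, not the paper's actual stimuli.
    """
    t = np.arange(n) / SR
    base = 400.0 if temp == "hot" else 1200.0
    tone = np.sin(2 * np.pi * (base + rng.normal(0, 50)) * t)
    return tone + rng.normal(0, 0.1, n)     # add broadband noise

def spectral_centroid(x):
    """Power-weighted mean frequency of the signal's spectrum."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / SR)
    return float((freqs * power).sum() / power.sum())

# One acoustic feature per clip, 20 clips per "temperature".
feats = [spectral_centroid(synth_pour(t))
         for t in ["hot"] * 20 + ["cold"] * 20]
labels = [0] * 20 + [1] * 20

# Single-feature threshold classifier: midpoint of the class means.
thresh = (np.mean(feats[:20]) + np.mean(feats[20:])) / 2
preds = [int(f > thresh) for f in feats]
accuracy = float(np.mean([p == y for p, y in zip(preds, labels)]))
```

With these synthetic stimuli the two classes separate cleanly on one feature; real pouring sounds would need richer features (e.g. full spectrograms) and a learned model, which is where the pre-trained network in the study comes in.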
Pages: 9