Hearing temperatures: employing machine learning for elucidating the cross-modal perception of thermal properties through audition

Citations: 0
Authors
Wenger, Mohr [1 ,2 ]
Maimon, Amber [1 ,3 ]
Yizhar, Or [1 ,2 ,4 ]
Snir, Adi [1 ]
Sasson, Yonatan [1 ]
Amedi, Amir [1 ]
Affiliations
[1] Reichman Univ, Baruch Ivcher Inst Brain Cognit & Technol, Baruch Ivcher Sch Psychol, Herzliyya, Israel
[2] Hebrew Univ Jerusalem, Dept Cognit & Brain Sci, Jerusalem, Israel
[3] Ben Gurion Univ Negev, Dept Brain & Cognit Sci, Computat Psychiat & Neurotechnol Lab, Beer Sheva, Israel
[4] Max Planck Inst Human Dev, Res Grp Adapt Memory & Decis Making, Berlin, Germany
Source
FRONTIERS IN PSYCHOLOGY | 2024, Vol. 15
Keywords
cross-modal correspondences; multisensory integration; sensory; thermal perception; multimodal; BRAIN; ORIGINS; SOUND; LIPS
DOI
10.3389/fpsyg.2024.1353490
Chinese Library Classification
B84 [Psychology]
Discipline Codes
04; 0402
Abstract
People can use their sense of hearing to discern thermal properties, though they are for the most part unaware that they can do so. Although people unequivocally claim that they cannot perceive the temperature of water from the sound of it being poured, our research further strengthens the evidence that they can. This multimodal ability is implicitly acquired in humans, likely through perceptual learning over a lifetime of exposure to differences in the physical attributes of pouring water. In this study, we explore people's perception of this intriguing cross-modal correspondence and investigate the psychophysical foundations of this complex ecological mapping by employing machine learning. Our results show that not only can humans classify the auditory properties of pouring water in practice, but the physical characteristics underlying this phenomenon can also be classified by a pre-trained deep neural network.
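The abstract does not specify the network architecture or audio features used; the sketch below only illustrates the general approach under stated assumptions: log-mel spectrograms as input and an ImageNet-pretrained ResNet-18 (via torchvision) with its head replaced for binary hot/cold classification. The file name and the two-class labeling are hypothetical, and the head is untrained until fine-tuned on labeled pouring-water clips.

# Hypothetical sketch: classifying hot vs. cold pouring-water recordings
# with a pretrained CNN. Features (log-mel spectrograms) and backbone
# (ImageNet-pretrained ResNet-18) are illustrative assumptions, not the
# paper's reported setup; the audio path is a placeholder.
import torch
import torch.nn as nn
import torchaudio
import torchvision

def wav_to_logmel(path, sample_rate=16000, n_mels=64):
    """Load a clip, mix down to mono, and return a log-mel spectrogram."""
    waveform, sr = torchaudio.load(path)
    waveform = waveform.mean(dim=0, keepdim=True)  # mix down to mono
    if sr != sample_rate:
        waveform = torchaudio.functional.resample(waveform, sr, sample_rate)
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate, n_mels=n_mels)(waveform)
    logmel = torch.log(mel + 1e-6)       # compress dynamic range
    return logmel.repeat(3, 1, 1)        # tile to 3 channels for an image CNN

# Pretrained backbone with a new 2-way head (hot vs. cold)
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)

x = wav_to_logmel("pouring_water_clip.wav").unsqueeze(0)  # (1, 3, n_mels, time)
logits = model(x)
print(logits.softmax(dim=-1))  # meaningless until the head is fine-tuned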
Pages: 9