Evidence for a supra-modal representation of emotion from cross-modal adaptation

Cited by: 25
Authors
Pye, Annie [1 ]
Bestelmeyer, Patricia E. G. [1 ]
Affiliations
[1] Bangor Univ, Sch Psychol, Bangor LL57 2AS, Gwynedd, Wales
Keywords
Supra-modal representation; Cross-modal; Adaptation; Emotion; Voice; NEURAL REPRESENTATIONS; AUDITORY ADAPTATION; VISUAL-ADAPTATION; FACIAL IDENTITY; FACE; EXPRESSION; VOICE; PERCEPTION; SEX; SYSTEM;
DOI
10.1016/j.cognition.2014.11.001
Chinese Library Classification
B84 [Psychology];
Discipline codes
04; 0402;
Abstract
Successful social interaction hinges on accurate perception of emotional signals. These signals are typically conveyed multi-modally by the face and voice. Previous research has demonstrated uni-modal contrastive aftereffects for emotionally expressive faces or voices. Here we were interested in whether these aftereffects transfer across modality as theoretical models predict. We show that adaptation to facial expressions elicits significant auditory aftereffects. Adaptation to angry facial expressions caused ambiguous vocal stimuli drawn from an anger-fear morphed continuum to be perceived as less angry and more fearful relative to adaptation to fearful faces. In a second experiment, we demonstrate that these aftereffects are not dependent on learned face-voice congruence, i.e. adaptation to one facial identity transferred to an unmatched voice identity. Taken together, our findings provide support for a supra-modal representation of emotion and suggest further that identity and emotion may be processed independently from one another, at least at the supra-modal level of the processing hierarchy. (C) 2014 Elsevier B.V. All rights reserved.
Pages: 245-251
Page count: 7
Related papers
50 in total
  • [21] HCMSL: Hybrid Cross-modal Similarity Learning for Cross-modal Retrieval
    Zhang, Chengyuan
    Song, Jiayu
    Zhu, Xiaofeng
    Zhu, Lei
    Zhang, Shichao
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2021, 17 (01)
  • [22] Enhanced Multimodal Representation Learning with Cross-modal KD
    Chen, Mengxi
    Xing, Linyu
    Wang, Yu
    Zhang, Ya
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 11766 - 11775
  • [23] Cross-modal Representation Learning with Nonlinear Dimensionality Reduction
    Kaya, Semih
    Vural, Elif
    2019 27TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2019
  • [24] Cross-modal hashing retrieval with compatible triplet representation
    Hao, Zhifeng
    Jin, Yaochu
    Yan, Xueming
    Wang, Chuyue
    Yang, Shangshang
    Ge, Hong
    NEUROCOMPUTING, 2024, 602
  • [25] Representation separation adversarial networks for cross-modal retrieval
    Deng, Jiaxin
    Ou, Weihua
    Gou, Jianping
    Song, Heping
    Wang, Anzhi
    Xu, Xing
    WIRELESS NETWORKS, 2024, 30 (05) : 3469 - 3481
  • [26] Learning Cross-Modal Aligned Representation With Graph Embedding
    Zhang, Youcai
    Cao, Jiayan
    Gu, Xiaodong
    IEEE ACCESS, 2018, 6 : 77321 - 77333
  • [27] Cross-modal Representation Learning for Understanding Manufacturing Procedure
    Hashimoto, Atsushi
    Nishimura, Taichi
    Ushiku, Yoshitaka
    Kameko, Hirotaka
    Mori, Shinsuke
    CROSS-CULTURAL DESIGN-APPLICATIONS IN LEARNING, ARTS, CULTURAL HERITAGE, CREATIVE INDUSTRIES, AND VIRTUAL REALITY, CCD 2022, PT II, 2022, 13312 : 44 - 57
  • [28] Towards Cross-Modal Causal Structure and Representation Learning
    Mao, Haiyi
    Liu, Hongfu
    Dou, Jason Xiaotian
    Benos, Panayiotis V.
    MACHINE LEARNING FOR HEALTH, VOL 193, 2022, 193 : 120 - 140
  • [29] Variational Deep Representation Learning for Cross-Modal Retrieval
    Yang, Chen
    Deng, Zongyong
    Li, Tianyu
    Liu, Hao
    Liu, Libo
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2021, PT II, 2021, 13020 : 498 - 510
  • [30] Task Switching, Modality Compatibility, and the Supra-Modal Function of Eye Movements
    Stephan, Denise Nadine
    Koch, Iring
    Hendler, Jessica
    Huestegge, Lynn
    EXPERIMENTAL PSYCHOLOGY, 2013, 60 (02) : 90 - 99