Speech-Based Meaning of Music

Citations: 0
Author
Karbanova, Alice [1 ]
Affiliation
[1] Masaryk Univ, Brno, Czech Republic
Keywords
Song perception; Musical semantics; Cognitive functions; Modules; Specificity; Vocal expression; Acoustic cues; Neural basis; Language; Brain; Emotions; Cortex; Recognition; Performance; Perception
DOI
10.1007/978-981-97-1549-7_26
Chinese Library Classification
O42 [Acoustics]
Discipline Classification Codes
070206; 082403
Abstract
Music, much like language, is a distinctively human and universal faculty that engages various facets of cognition and serves as a valuable medium for exploring cognitive processes. In this chapter, we delve into the intricate interplay between language and music processing, with a specific focus on the semantic realm. The central question is whether language and music compete for processing resources during the perception and interpretation of a song. A song, which amalgamates speech and music, naturally offers a context for comparing the processing of these two expressive forms. Over the past decades, the connection between language and music processing has become a central theme in cognitive neuroscience, and here we synthesize the latest insights from that field. The overarching goal of this chapter is to present a broad panorama of current knowledge of the processes involved in perceiving and comprehending a complex semiotic object, addressing the extent to which music influences the perception of speech, as exemplified by sung lyrics in song listening. We review recent studies probing the music-language interface and explore related issues such as domain specificity and overlapping representations. The accumulated evidence not only suggests the existence of shared neural processing networks for music and language but also implies the presence of independent components in their processing.
Pages: 385-397
Number of pages: 13