High-Order Areas and Auditory Cortex Both Represent the High-Level Event Structure of Music

Cited by: 11
Authors
Williams, Jamal A. [1 ]
Margulis, Elizabeth H. [1 ]
Nastase, Samuel A. [1 ]
Chen, Janice [2 ]
Hasson, Uri [1 ]
Norman, Kenneth A. [1 ]
Baldassano, Christopher [3 ]
Affiliations
[1] Princeton Univ, Princeton, NJ 08544 USA
[2] Johns Hopkins Univ, Baltimore, MD USA
[3] Columbia Univ, New York, NY USA
Keywords
TEMPORAL RECEPTIVE WINDOWS; BRAIN; LANGUAGE; INTEGRATION; PERCEPTION; HIERARCHY; RESPONSES; NETWORK
DOI
10.1162/jocn_a_01815
CLC classification
Q189 [Neuroscience]
Subject classification code
071006
Abstract
Recent fMRI studies of event segmentation have found that default mode regions represent high-level event structure during movie watching. In these regions, neural patterns are relatively stable during events and shift at event boundaries. Music, like narratives, contains hierarchical event structure (e.g., sections are composed of phrases). Here, we tested the hypothesis that brain activity patterns in default mode regions reflect the high-level event structure of music. We used fMRI to record brain activity from 25 participants (male and female) as they listened to a continuous playlist of 16 musical excerpts; a separate group of participants annotated these excerpts by marking when meaningful changes occurred in each one. We then identified temporal boundaries between stable patterns of brain activity using a hidden Markov model and compared the locations of the model boundaries to the locations of the human annotations. We identified multiple brain regions whose boundaries significantly matched the observer-identified boundaries, including auditory cortex, medial prefrontal cortex, parietal cortex, and angular gyrus. From these results, we conclude that both higher-order and sensory areas contain information about the high-level event structure of music. Moreover, the higher-order areas in this study overlap with areas found in previous studies of event perception in movies and audio narratives, including regions in the default mode network.
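The analysis pipeline the abstract describes (fit a hidden Markov model to find stable activity patterns, then compare model boundaries to human annotations) can be illustrated compactly. The sketch below is a minimal reconstruction, not the authors' code: it assumes the event segmentation HMM from the BrainIAK toolbox (brainiak.eventseg.event.EventSegment) as the boundary-finding model, and the `bold` matrix, `human_bounds` list, 3-TR match tolerance, and permutation test are hypothetical stand-ins for the paper's data and statistics.

```python
# Minimal sketch of HMM event segmentation plus boundary matching.
# Assumptions: BrainIAK's EventSegment as the HMM; toy data and a
# simple permutation test in place of the paper's actual analysis.
import numpy as np
from brainiak.eventseg.event import EventSegment  # assumed dependency

def hmm_boundaries(bold, n_events):
    """Fit an event-segmentation HMM to a (timepoints x voxels) matrix
    and return the timepoints where the most probable event switches."""
    ev = EventSegment(n_events)
    ev.fit(bold)
    labels = np.argmax(ev.segments_[0], axis=1)  # per-timepoint event index
    return np.where(np.diff(labels) != 0)[0] + 1

def match_count(model_bounds, human_bounds, tolerance=3):
    """Count model boundaries within `tolerance` TRs of any annotation."""
    human = np.asarray(human_bounds)
    return int(sum(np.min(np.abs(human - b)) <= tolerance
                   for b in model_bounds))

rng = np.random.default_rng(0)
n_trs = 300
bold = rng.standard_normal((n_trs, 100))  # toy data: 300 TRs x 100 voxels
human_bounds = [40, 95, 150, 210, 260]    # hypothetical annotator boundaries

model_bounds = hmm_boundaries(bold, n_events=len(human_bounds) + 1)
observed = match_count(model_bounds, human_bounds)

# Null distribution: the same number of boundaries placed at random TRs.
null = [match_count(rng.choice(n_trs, len(model_bounds), replace=False),
                    human_bounds)
        for _ in range(1000)]
p_value = np.mean(np.asarray(null) >= observed)
print(f"matched boundaries: {observed}/{len(model_bounds)}, p = {p_value:.3f}")
```

On real data, a region's timepoints-by-voxels BOLD matrix would replace the toy array and the annotators' consensus boundaries would replace `human_bounds`; the toy noise input merely exercises the pipeline.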
Pages: 699-714
Page count: 16
Related Papers
50 items in total
  • [1] Auditory motion direction encoding in auditory cortex and high-level visual cortex
    Alink, Arjen
    Euler, Felix
    Kriegeskorte, Nikolaus
    Singer, Wolf
    Kohler, Axel
    HUMAN BRAIN MAPPING, 2012, 33(4): 969-978
  • [2] CFDlang: High-level code generation for high-order methods in fluid dynamics
    Rink, Norman A.
    Huismann, Immo
    Susungi, Adilla
    Castrillon, Jeronimo
    Stiller, Joerg
    Froehlich, Jochen
    Tadonki, Claude
    RWDSL2018: PROCEEDINGS OF THE REAL WORLD DOMAIN SPECIFIC LANGUAGES WORKSHOP 2018, 2018
  • [3] Neuronal Encoding in a High-Level Auditory Area: From Sequential Order of Elements to Grammatical Structure
    Cazala, Aurore
    Giret, Nicolas
    Edeline, Jean-Marc
    Del Negro, Catherine
    JOURNAL OF NEUROSCIENCE, 2019, 39(31): 6150-6161
  • [4] Root high-order cumulant MUSIC
    Yue, Yaxing
    Xu, Yougen
    Liu, Zhiwen
    DIGITAL SIGNAL PROCESSING, 2022, 122
  • [5] High-Level Event Mining: A Framework
    Bakullari, Bianka
    van der Aalst, Wil M. P.
    2022 4TH INTERNATIONAL CONFERENCE ON PROCESS MINING (ICPM 2022), 2022: 136-143
  • [6] High-level event identification in social media
    Dashdorj, Zolzaya
    Altangerel, Erdenebaatar
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2019, 31(3)
  • [7] Level clipped high-order OFDM
    Wulich, D
    Dinur, N
    Glinowiecki, A
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2000, 48(6): 928-930
  • [8] High-level event recognition in unconstrained videos
    Jiang, Yu-Gang
    Bhattacharya, Subhabrata
    Chang, Shih-Fu
    Shah, Mubarak
    INTERNATIONAL JOURNAL OF MULTIMEDIA INFORMATION RETRIEVAL, 2013, 2(2): 73-101
  • [9] The topography of high-order human object areas
    Malach, R
    Levy, I
    Hasson, U
    TRENDS IN COGNITIVE SCIENCES, 2002, 6(4): 176-184
  • [10] High-level cognition during story listening is reflected in high-order dynamic correlations in neural activity patterns
    Owen, Lucy L. W.
    Chang, Thomas H.
    Manning, Jeremy R.
    NATURE COMMUNICATIONS, 2021, 12