Motor cortex maps articulatory features of speech sounds

Cited by: 466
Authors
Pulvermüller, F [1 ]
Huss, M [1 ]
Kherif, F [1 ]
Martin, FMDP [1 ]
Hauk, O [1 ]
Shtyrov, Y [1 ]
Affiliation
[1] MRC, Cognit & Brain Sci Unit, Cambridge CB2 2EF, England
Funding
UK Medical Research Council;
Keywords
cell assembly; functional MRI; perception-action cycle; mirror neurons; phonetic distinctive feature;
DOI
10.1073/pnas.0509989103
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline classification codes
07; 0710; 09;
Abstract
The processing of spoken language has been attributed to areas in the superior temporal lobe, where speech stimuli elicit the greatest activation. However, neurobiological and psycholinguistic models have long postulated that knowledge about the articulatory features of individual phonemes has an important role in their perception and in speech comprehension. To probe the possible involvement of specific motor circuits in the speech-perception process, we used event-related functional MRI and presented experimental subjects with spoken syllables, including [p] and [t] sounds, which are produced by movements of the lips or tongue, respectively. Physically similar nonlinguistic signal-correlated noise patterns were used as control stimuli. In localizer experiments, subjects had to silently articulate the same syllables and, in a second task, move their lips or tongue. Speech perception most strongly activated superior temporal cortex. Crucially, however, distinct motor regions in the precentral gyrus sparked by articulatory movements of the lips and tongue were also differentially activated in a somatotopic manner when subjects listened to the lip- or tongue-related phonemes. This sound-related somatotopic activation in precentral gyrus shows that, during speech perception, specific motor circuits are recruited that reflect phonetic distinctive features of the speech sounds encountered, thus providing direct neuroimaging support for specific links between the phonological mechanisms for speech perception and production.
Pages: 7865 - 7870
Number of pages: 6
Related Papers
50 in total
  • [31] Aging of Speech Production, From Articulatory Accuracy to Motor Timing
    Tremblay, Pascale
    Deschamps, Isabelle
    Bedard, Pascale
    Tessier, Marie-Helene
    Carrier, Micael
    Thibeault, Melanie
    PSYCHOLOGY AND AGING, 2018, 33 (07) : 1022 - 1034
  • [32] TALKER RECOGNITION BY STATISTICAL FEATURES OF SPEECH SOUNDS
    FURUI, S
    ITAKURA, F
    ELECTRONICS & COMMUNICATIONS IN JAPAN, 1973, 56 (11) : 62 - 71
  • [33] AUDIO FEATURES: NEW SOUNDS OF AMERICAN SPEECH
    Adams, Michael
    AMERICAN SPEECH, 2012, 87 (01) : 3 - 6
  • [34] ACTIVATION OF THE HUMAN AUDITORY-CORTEX BY SPEECH SOUNDS
    HARI, R
    ACTA OTO-LARYNGOLOGICA, 1991, : 132 - 138
  • [35] Acoustic and Articulatory Features of Diphthong Production: A Speech Clarity Study
    Tasko, Stephen M.
    Greilick, Kristin
    JOURNAL OF SPEECH LANGUAGE AND HEARING RESEARCH, 2010, 53 (01) : 84 - 99
  • [36] Articulatory and Stacked Bottleneck Features for Low Resource Speech Recognition
    Shetty, Vishwas M.
    Sharon, Rini A.
    Abraham, Basil
    Seeram, Tejaswi
    Prakash, Anusha
    Ravi, Nithya
    Umesh, S.
    19TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2018), VOLS 1-6: SPEECH RESEARCH FOR EMERGING MARKETS IN MULTILINGUAL SOCIETIES, 2018, : 3202 - 3206
  • [37] QUALITY ASSESSMENT OF VOICE CONVERTED SPEECH USING ARTICULATORY FEATURES
    Rajpal, Avni
    Shah, Nirmesh J.
    Zaki, Mohammadi
    Patil, Hemant A.
    2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2017, : 5515 - 5519
  • [38] Speech recognition based on a combination of acoustic features with articulatory information
    LU Xugang
    DANG Jianwu
    (Japan Advanced Institute of Science and Technology)
    CHINESE JOURNAL OF ACOUSTICS, 2005, (03) : 271 - 279
  • [39] CAN ARTICULATORY BEHAVIOR IN MOTOR SPEECH DISORDERS BE ACCOUNTED FOR BY THEORIES OF NORMAL SPEECH PRODUCTION
    WEISMER, G
    TJADEN, K
    KENT, RD
    JOURNAL OF PHONETICS, 1995, 23 (1-2) : 149 - 164
  • [40] Articulatory features for speech-driven head motion synthesis
    Ben-Youssef, Atef
    Shimodaira, Hiroshi
    Braude, David A.
    14TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2013), VOLS 1-5, 2013, : 2757 - 2761