Modality-Specific Perceptual Learning of Vocoded Auditory versus Lipread Speech: Different Effects of Prior Information

Cited by: 3
Authors
Bernstein, Lynne E. [1]
Auer, Edward T. [1]
Eberhardt, Silvio P. [1]
Affiliations
[1] George Washington Univ, Speech Language & Hearing Sci Dept, Washington, DC 20052 USA
Keywords
speech perception; multisensory; perceptual learning; lipreading; vocoded speech; word learning; spoken language processing; speech perception training; REVERSE HIERARCHIES; AUDIOVISUAL SPEECH; TOP-DOWN; FEEDBACK; RECOGNITION; INTELLIGIBILITY; STIMULATION; ACCURACY; MOVEMENT; SOUNDS;
DOI
10.3390/brainsci13071008
Chinese Library Classification
Q189 [Neuroscience];
Discipline Code
071006;
Abstract
Traditionally, speech perception training paradigms have not adequately taken into account the possibility that there may be modality-specific requirements for perceptual learning with auditory-only (AO) versus visual-only (VO) speech stimuli. The study reported here investigated the hypothesis that there are modality-specific differences in how normal-hearing participants use prior information during vocoded versus VO speech training. Two experiments, one with vocoded AO speech (Experiment 1) and one with VO (lipread) speech (Experiment 2), investigated the effects of giving trainees different types of prior information on each trial during training. Training comprised four ~20 min sessions, during which participants learned to label novel visual images using novel spoken words. Participants were assigned to different types of prior information during training: Word Group trainees saw a printed version of each training word (e.g., "tethon"), and Consonant Group trainees saw only its consonants (e.g., "t_th_n"). Additional groups received no prior information (Experiment 1, AO Group; Experiment 2, VO Group) or a spoken version of the stimulus in a different modality from the training stimuli (Experiment 1, Lipread Group; Experiment 2, Vocoder Group). That is, each experiment included a group that received prior information in the training-stimulus modality of the other experiment. In both experiments, the Word Groups had difficulty retaining the novel words they attempted to learn during training. However, when the training stimuli were vocoded, the Word Group improved their phoneme identification. When the training stimuli were visual speech, the Consonant Group improved their phoneme identification and their open-set sentence lipreading. The results are considered in light of theoretical accounts of perceptual learning in relation to perceptual modality.
Pages: 35