
The role of visual speech information in supporting perceptual learning of degraded speech.

Journal of Experimental Psychology: Applied (2012)

Abstract
Following cochlear implantation, hearing-impaired listeners must adapt to speech as heard through their prosthesis. Visual speech information (VSI; the lip and facial movements of speech) is typically available in everyday conversation. Here, we investigate whether learning to understand a popular auditory simulation of speech as transduced by a cochlear implant (noise-vocoded [NV] speech) is enhanced by the provision of VSI. Experiment 1 demonstrates that provision of VSI concurrently with a clear auditory form of an utterance as feedback after each NV utterance during training does not enhance learning over clear auditory feedback alone, suggesting that VSI does not play a special role in retuning of perceptual representations of speech. Experiment 2 demonstrates that provision of VSI concurrently with NV speech (a simulation of typical real-world experience) facilitates perceptual learning of NV speech, but only when an NV-only repetition of each utterance is presented after the composite NV/VSI form during training. Experiment 3 shows that this more efficient learning of NV speech is probably due to the additional listening effort required to comprehend the utterance when clear feedback is never provided and is not specifically due to the provision of VSI. Our results suggest that rehabilitation after cochlear implantation does not necessarily require naturalistic audiovisual input, but may be most effective when (a) training utterances are relatively intelligible (approximately 85% of words reported correctly during effortful listening), and (b) the individual has the opportunity to map what they know of an utterance's linguistic content onto the degraded form.
Keywords
speech perception, cochlear implant rehabilitation, listening effort, cross-modal integration, audiovisual speech