Categorical encoding of speech sounds: beyond auditory cortices

bioRxiv (2021)

Abstract
What processes lead to categorical perception of speech sounds? Investigation of this question is hampered by the fact that categorical speech perception is normally confounded by acoustic differences in the stimulus. By using ambiguous sounds, however, it is possible to dissociate acoustic from perceptual stimulus representations. We used a binaural integration task in which the inputs to the two ears were complementary, so that phonemic identity emerged only from their integration into a single percept. Twenty-seven normally hearing individuals took part in an fMRI study in which they were presented with an ambiguous syllable (intermediate between /da/ and /ga/) in one ear and with a meaning-differentiating acoustic feature (the third formant) in the other ear. Multi-voxel pattern searchlight analysis was used to identify brain areas that consistently differentiated between response patterns associated with different syllable reports. By comparing responses to different stimuli with identical syllable reports and to identical stimuli with different syllable reports, we disambiguated whether these regions primarily differentiated the acoustics of the stimuli or the syllable report. We found that BOLD activity patterns in the left anterior insula (AI), the left supplementary motor cortex (SMA), the left ventral motor cortex, and the right motor and somatosensory cortex (M1/S1) represent listeners' syllable reports irrespective of stimulus acoustics. The same areas have previously been implicated in decision-making (AI), response selection (SMA), and response initiation and feedback (M1/S1). Our results indicate that the emergence of categorical speech sounds implicates decision-making mechanisms and auditory-motor transformations acting on sensory inputs.

Significance Statement
A central question in psycholinguistic research is whether speech sounds are neurally coded as abstract perceptual units that are distinct from the sensory cues from which they are derived. A challenge for most studies of perception is that perceptual interpretations of sensory stimuli may be confounded by the physical properties of the stimuli. Here, we use functional magnetic resonance imaging (fMRI) and multi-voxel pattern analysis (MVPA) to address the question of where in the cerebral cortex syllable percepts emerge during binaural integration. By controlling for physical stimulus acoustics, we find that the perceptual report of syllables arises in higher-order non-auditory cortical areas. This opens up the possibility that these areas determine the syllables we hear.

Competing Interest Statement
The authors have declared no competing interest.
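The core analysis described in the abstract is a whole-brain searchlight MVPA that decodes the reported syllable from local fMRI activity patterns. As an illustration only, the sketch below shows how such an analysis might look using nilearn's SearchLight; the file names, label arrays, sphere radius, and cross-validation scheme are assumptions for the example and do not reproduce the authors' actual pipeline.

```python
# Minimal sketch (not the authors' code): searchlight decoding of syllable
# report from single-trial fMRI beta images, assuming the nilearn library.
# All file names and label arrays below are hypothetical placeholders.
import numpy as np
from nilearn.decoding import SearchLight
from nilearn.image import load_img
from sklearn.model_selection import LeaveOneGroupOut

betas = load_img("sub-01_trialwise_betas.nii.gz")   # hypothetical 4D image, one volume per trial
labels = np.load("sub-01_syllable_reports.npy")     # hypothetical per-trial report: "da" or "ga"
runs = np.load("sub-01_run_labels.npy")             # hypothetical run index per trial (CV groups)
mask = load_img("sub-01_brainmask.nii.gz")          # hypothetical whole-brain mask

# Classify the reported syllable from the activity pattern in a spherical
# neighborhood around each voxel, cross-validating across runs; above-chance
# accuracy marks voxels whose local pattern distinguishes /da/ from /ga/ reports.
sl = SearchLight(
    mask_img=mask,
    radius=6.0,            # sphere radius in mm (assumed value)
    estimator="svc",       # linear support vector classifier
    cv=LeaveOneGroupOut(),
    n_jobs=-1,
)
sl.fit(betas, labels, groups=runs)

# sl.scores_ holds one cross-validated accuracy per searchlight center.
print("peak decoding accuracy:", sl.scores_.max())
```

To separate acoustics from percept as the study does, such decoding would additionally be compared across conditions where the stimulus is identical but the report differs, and vice versa; the sketch only shows the basic report-decoding step.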