Is articulation-to-speech synthesis language independent? A pilot study

Semantic Scholar (2019)

Abstract
Articulation-to-speech (ATS) synthesis directly synthesizes speech from articulatory information and does not require textual input. ATS has recently shown potential for assistive technologies such as silent speech interfaces (SSIs). ATS is theoretically language-independent, since no dictionary is involved. However, to our knowledge, no data-based experiment has been conducted to answer this question, due to the lack of multi-language articulatory movement data from the same speakers. In this study, we conducted speaker-dependent ATS experiments using data collected from bilingual speakers, each of whom speaks two of three languages: English, Spanish, and Korean. The experimental results indicated that performance degraded when ATS was trained on one language and tested on another. Interestingly, we observed that the performance of ATS for one language could be improved if some samples of another language were added to the training dataset.
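To make the cross-language protocol concrete, the sketch below mirrors the two conditions the abstract describes: train on one language and test on another, then add a few utterances of the test language to the training set. This is not the paper's implementation; the model (a small LSTM regressing articulatory frames to mel-spectrogram frames), the feature dimensions, and the synthetic stand-in data are all hypothetical assumptions for illustration only.

```python
import torch
import torch.nn as nn

EMA_DIM, MEL_DIM, SEQ_LEN = 12, 80, 200  # assumed: EMA sensor dims, mel bands, frames

class ATSNet(nn.Module):
    """Frame-level articulatory-to-acoustic regression (illustrative only)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(EMA_DIM, hidden, batch_first=True)
        self.out = nn.Linear(hidden, MEL_DIM)

    def forward(self, x):          # x: (batch, frames, EMA_DIM)
        h, _ = self.rnn(x)
        return self.out(h)         # (batch, frames, MEL_DIM)

def make_fake_corpus(n_utts):
    """Stand-in for a real bilingual EMA+audio corpus (synthetic placeholder)."""
    return (torch.randn(n_utts, SEQ_LEN, EMA_DIM),
            torch.randn(n_utts, SEQ_LEN, MEL_DIM))

def train(model, x, y, epochs=5):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

@torch.no_grad()
def test_mse(model, x, y):
    # MSE as a rough stand-in for an acoustic quality metric
    return nn.MSELoss()(model(x), y).item()

# Condition 1: train on language A, test on language B.
xa, ya = make_fake_corpus(32)   # "English" utterances (synthetic)
xb, yb = make_fake_corpus(32)   # "Spanish" utterances (synthetic)
mono = train(ATSNet(), xa, ya)
print("A-trained, B-tested MSE:", test_mse(mono, xb, yb))

# Condition 2: add some language-B samples to the training set,
# mirroring the mixed-training setup the abstract reports on.
x_mix = torch.cat([xa, xb[:8]])
y_mix = torch.cat([ya, yb[:8]])
mixed = train(ATSNet(), x_mix, y_mix)
print("mixed-trained, B-tested MSE:", test_mse(mixed, xb, yb))
```

With real articulatory data, the comparison between the two printed scores is what would quantify the cross-language degradation and any gain from mixed-language training; with the synthetic placeholders here the numbers are meaningless and the script only demonstrates the experimental structure.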