
Application of Deep Learning Models for Bone-Conducted Speech Signals Extracted in the Form of Bone Conduction Headphones

Ho-Juhn Song, Siu Fung Yu, Shibo Sun, Woong Ki Jang, Hyang-Hee Hwang, H.J. Kim, Byeong-Hee Kim, H.K. Lee

Han-guk Saengsan Jejo Hakoeji / Journal of the Korean Society of Manufacturing Technology Engineers (2024)

Abstract
In this study, we used deep learning to align bone-conducted speech signals with air-conducted speech signals, aiming to replace the traditional air-conduction microphones used in voice-based services, which also capture surrounding sounds. We fabricated headphones with bone-conduction microphones placed on the rami (the branches of the jawbone), in line with conventional bone-conduction headphone configurations. Using LSTM, CNN, and CRNN models, we built databases that align bone-conducted speech signals with their air-conducted counterparts and tested them with bone-conducted speech captured via our custom-made headphones. The CNN model performed best at distinguishing three English words (“apple,” “hello,” and “pass”), including their voiceless pronunciations. In conclusion, our study shows that deep learning models can effectively use bone-conducted speech signals extracted from the rami for automatic speech recognition (ASR), paving the way for future ASR technology that precisely recognizes only the speaker’s voice.
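The abstract does not give implementation details, so the following is a minimal, hypothetical sketch of the kind of CNN word classifier it describes: a small PyTorch model mapping a fixed-size log-mel spectrogram of a bone-conducted utterance to one of the three word classes (“apple,” “hello,” “pass”). The input size (64×64), layer widths, and framework choice are assumptions for illustration, not the authors’ configuration.

```python
# Illustrative sketch only (not the authors' code): a small CNN that
# classifies fixed-length log-mel spectrograms of bone-conducted speech
# into three word classes. Input shape and layer sizes are assumed.
import torch
import torch.nn as nn


class WordCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # (B, 16, 64, 64)
            nn.ReLU(),
            nn.MaxPool2d(2),                               # (B, 16, 32, 32)
            nn.Conv2d(16, 32, kernel_size=3, padding=1),   # (B, 32, 32, 32)
            nn.ReLU(),
            nn.MaxPool2d(2),                               # (B, 32, 16, 16)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),                      # word logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_mels, n_frames), e.g. a 64x64 log-mel spectrogram
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = WordCNN()
    dummy = torch.randn(8, 1, 64, 64)   # batch of 8 synthetic spectrograms
    logits = model(dummy)               # shape (8, 3): scores per word class
    print(logits.shape)
```

The LSTM and CRNN baselines mentioned in the abstract would swap the convolutional feature extractor for recurrent layers (or stack recurrent layers after it); the comparison reported here favors the purely convolutional variant.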