iSEGAN: Improved Speech Enhancement Generative Adversarial Networks

arXiv (2020)

Abstract
Popular neural network-based speech enhancement systems operate on the magnitude spectrogram and ignore the phase mismatch between the noisy and clean speech signals. Conditional generative adversarial networks (cGANs) show promise in addressing the phase mismatch problem by directly mapping the raw noisy speech waveform to the underlying clean speech signal. However, cGAN systems are difficult to train and stabilize, and they still fall short of the performance achieved by spectral enhancement approaches. This paper investigates whether different normalization strategies and one-sided label smoothing can further stabilize the cGAN-based speech enhancement model. In addition, we propose incorporating a Gammatone-based auditory filtering layer and a trainable pre-emphasis layer to further improve the performance of the cGAN framework. Simulation results show that the proposed approaches improve the speech enhancement performance of cGAN systems in addition to yielding improved stability and reduced computational effort.
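Two of the techniques the abstract names are simple enough to sketch concretely. The following is a minimal, hedged illustration (not the paper's implementation): one-sided label smoothing softens only the discriminator's "real" targets (e.g. to 0.9) while fake targets stay at 0, and a first-order pre-emphasis filter computes y[n] = x[n] - alpha * x[n-1]; here alpha is a fixed constant, whereas the paper proposes learning it as a layer. Function names and the 0.9/0.97 values are illustrative conventions, not taken from the paper.

```python
import numpy as np

def discriminator_bce(real_scores, fake_scores, real_label=0.9):
    """Discriminator binary cross-entropy with one-sided label smoothing.

    Real samples are trained toward `real_label` (e.g. 0.9) instead of 1.0,
    discouraging overconfident discriminator outputs; fake targets remain 0
    (the "one-sided" part). A common GAN stabilization heuristic.
    """
    eps = 1e-12  # numerical guard for log(0)
    real_loss = -(real_label * np.log(real_scores + eps)
                  + (1.0 - real_label) * np.log(1.0 - real_scores + eps))
    fake_loss = -np.log(1.0 - fake_scores + eps)
    return float(np.mean(real_loss) + np.mean(fake_loss))

def pre_emphasis(x, alpha=0.97):
    """First-order pre-emphasis filter: y[n] = x[n] - alpha * x[n-1].

    alpha = 0.97 is a common fixed choice in speech processing; the paper
    instead makes this coefficient a trainable layer parameter.
    """
    y = np.copy(x)
    y[1:] -= alpha * x[:-1]
    return y
```

With smoothing disabled (`real_label=1.0`), a perfectly confident discriminator achieves near-zero loss; with smoothing enabled, that same confidence is penalized, which is exactly the regularizing effect the technique relies on.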
Keywords
speech