SSCGAN: speech style conversion based on GAN

Wenyu Mao, Jixing Li, Xiaozhou Guo, Ronxuan Shen, Huaxiang Lu, Zhanzhong Cao, Xinggang Wang, Chi Zhang

International Conference on Algorithms, Microchips and Network Applications (2022)

Abstract
Speech conversion has significant applications in medicine, robotics, and other industries. With the rise of deep learning, CycleGAN has been widely used in speech conversion. However, existing CycleGAN-based methods do not consider the temporal and spatial features of the speech signal. In addition, CycleGAN training is difficult to converge because of the generator's vanishing-gradient problem. We propose SSCGAN, whose generator is a U-shaped encoder-decoder network that extracts temporal and spatial features using 1D CNNs and 2D CNNs in parallel. A feature fusion module based on multi-scale mixed convolution is embedded between the encoder and decoder to achieve high-level fusion of the spatial and temporal features. To make training more stable and easier to converge, SSCGAN replaces the original Jensen–Shannon divergence with the Wasserstein distance when measuring the distance between probability distributions, which alleviates the generator's vanishing-gradient problem. In addition, SSCGAN adopts the PatchGAN structure in the discriminator, which captures local detail by dividing each sample into patches and thereby improves the discriminative ability of SSCGAN. Experimental results on the non-parallel corpus VCC 2018 show that SSCGAN outperforms existing methods such as CycleGAN-VC and StarGAN-VC. In inter-gender speech conversion, the MSD of SSCGAN is lower by 0.162 on average compared to the other methods, and in intra-gender speech conversion it is lower by 0.118 on average. In the subjective evaluation, participants also rated SSCGAN the best.
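The Wasserstein objective mentioned in the abstract can be sketched as follows. This is a minimal, generic illustration of the WGAN critic and generator losses, not the paper's actual implementation; the function names and the use of NumPy are my own assumptions.

```python
import numpy as np

def critic_loss(d_real, d_fake):
    """WGAN critic loss: minimizing this is equivalent to maximizing
    E[D(real)] - E[D(fake)], an estimate of the Wasserstein distance."""
    return np.mean(d_fake) - np.mean(d_real)

def generator_loss(d_fake):
    """WGAN generator loss: push the critic's scores on fakes upward."""
    return -np.mean(d_fake)

# Toy critic outputs (higher score = judged "more real"):
d_real = np.array([0.9, 0.8, 1.1])
d_fake = np.array([0.1, -0.2, 0.0])
print(critic_loss(d_real, d_fake))    # negative when the critic separates the two sets
print(generator_loss(d_fake))
```

Because these losses are linear in the critic's outputs (no saturating log terms as in the Jensen–Shannon-based GAN objective), gradients do not vanish when the critic confidently separates real from fake, which is the stabilizing effect the abstract refers to.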