Speaking in Wavelet Domain: A Simple and Efficient Approach to Speed up Speech Diffusion Model
CoRR (2024)
Abstract
Recently, Denoising Diffusion Probabilistic Models (DDPMs) have attained
leading performances across a diverse range of generative tasks. However, in
the field of speech synthesis, although DDPMs exhibit impressive performance,
their long training duration and substantial inference costs hinder practical
deployment. Existing approaches primarily focus on enhancing inference speed,
while approaches to accelerate training, a key factor in the costs associated
with adding or customizing voices, often necessitate complex modifications to
the model, compromising their universal applicability. To address the
aforementioned challenges, we pose a question: is it possible to enhance the
training/inference speed and performance of DDPMs by modifying the speech
signal itself? In this paper, we double the training and inference speed of
Speech DDPMs by simply redirecting the generative target to the wavelet domain.
This method not only achieves comparable or superior performance to the
original model in speech synthesis tasks but also demonstrates its versatility.
By investigating and utilizing different wavelet bases, our approach proves
effective not just in speech synthesis, but also in speech enhancement.
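The core idea of redirecting the generative target to the wavelet domain can be illustrated with a single-level discrete wavelet transform: it splits a waveform into an approximation band and a detail band, each half the original length, and is perfectly invertible. A minimal pure-Python Haar-wavelet sketch of this length-halving property (an illustration only, not the paper's implementation, which also explores other wavelet bases):

```python
import math

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform.

    Splits a signal of even length into a low-pass (approximation) band
    and a high-pass (detail) band, each half the original length.
    """
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse Haar transform: reconstructs the original signal exactly."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s)
        out.append((a - d) / s)
    return out

# Toy even-length "waveform" standing in for a speech frame.
x = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
cA, cD = haar_dwt(x)
print(len(x), len(cA), len(cD))  # each band is half the signal length
x_rec = haar_idwt(cA, cD)
print(max(abs(a - b) for a, b in zip(x, x_rec)))  # reconstruction error ~0
```

Because each wavelet band is half the length of the waveform, a diffusion model generating in this domain processes shorter sequences per step, which is the source of the training and inference speedup described above.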