Music2Dance: Music-driven Dance Generation using WaveNet

arXiv (2020)

Cited by 8 | 87 views
Abstract
In this paper, we propose a novel system, named Music2Dance, for fully automatic music-driven dance generation. Our key idea is to adapt WaveNet, originally designed for speech synthesis, to human motion synthesis. To bridge the large gap between these two tasks, we propose a novel network structure. Specifically, music features, extracted with the characteristics of rhythm and melody in mind, serve as the local condition of the network, while the dance type serves as its global condition. Both conditions are used to stabilize network training. Beyond the network architecture, another major challenge is the lack of data. To overcome this obstacle, we captured synchronized music-dance pairs performed by professional dancers and thereby built a high-quality music-dance dataset. Experiments demonstrate the effectiveness of the proposed system, and the method achieves state-of-the-art results.
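The conditioning scheme described above follows WaveNet's gated activation, where a local condition (here, per-frame music features) and a global condition (here, the dance type) are added inside the gate. The sketch below is a minimal, hypothetical numpy illustration of one such gated residual layer, not the authors' implementation; all parameter names and shapes are assumptions for illustration.

```python
import numpy as np

def dilated_causal_conv(x, w, dilation):
    """Kernel-size-2 dilated causal convolution.
    x: (T, C_in) motion-feature sequence; w: (2, C_in, C_out).
    out[t] depends only on x[t] and x[t - dilation] (causal)."""
    T, C_in = x.shape
    pad = np.zeros((dilation, C_in))
    x_shifted = np.vstack([pad, x])[:T]          # x delayed by `dilation` steps
    return x_shifted @ w[0] + x @ w[1]

def gated_layer(x, music_local, dance_global, params):
    """One WaveNet-style gated residual layer with
    local conditioning (music features, per time step) and
    global conditioning (dance type, one vector for the whole clip):
        z = tanh(W_f*x + V_f*y + U_f*g) * sigmoid(W_g*x + V_g*y + U_g*g)
    Parameter names (Wf, Vf, Uf, ...) are hypothetical."""
    d = params['dilation']
    f = (dilated_causal_conv(x, params['Wf'], d)
         + music_local @ params['Vf'] + dance_global @ params['Uf'])
    g = (dilated_causal_conv(x, params['Wg'], d)
         + music_local @ params['Vg'] + dance_global @ params['Ug'])
    z = np.tanh(f) * (1.0 / (1.0 + np.exp(-g)))  # gated activation unit
    skip = z @ params['Ws']                       # skip connection to the output stack
    residual = x + z @ params['Wr']               # residual connection to the next layer
    return residual, skip
```

Because the convolution is causal, each generated motion frame depends only on past frames plus the conditions, which is what allows autoregressive dance synthesis from a music track.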