A Diffusion-ReFinement Model for Sketch-to-Point Modeling

Dan Kong, Qiang Wang, Yongsheng Qi

Lecture Notes in Computer Science (2023)

Abstract
Diffusion probabilistic models have proven effective in generative tasks, but their variants have yet to demonstrate this effectiveness on cross-dimensional multimodal generation. Generating 3D models from single free-hand sketches is a particularly tricky cross-domain problem, one that grows more important and urgent with the widespread adoption of VR/AR technologies and portable touch screens. In this paper, we introduce a novel Sketch-to-Point Diffusion-ReFinement model to tackle this problem. By incorporating a new conditional reconstruction network and a refinement network, we overcome the barrier of multimodal generation between the two dimensions. By explicitly conditioning the generation process on a given sketch image, our method generates plausible point clouds that restore the sharp details and topology of 3D shapes while matching the input sketches. Extensive experiments on various datasets show that our model achieves highly competitive performance on the sketch-to-point generation task. The code is available at https://github.com/Walterkd/diffusion-refine-sketch2point .
Keywords
modeling, diffusion-refinement, sketch-to-point
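
To make the sketch-conditioned generation idea concrete, below is a minimal, self-contained sketch of DDPM-style ancestral sampling for point clouds conditioned on a sketch image, in PyTorch. Every name here (SketchEncoder, ConditionalEpsNet, sample_points) and every hyperparameter (128-dim condition vector, 100 steps, linear beta schedule, toy MLP denoiser) is an illustrative assumption, not the paper's architecture; the authors' actual implementation lives in the linked repository.

```python
import torch
import torch.nn as nn

class SketchEncoder(nn.Module):
    """Toy CNN mapping a grayscale sketch image to a conditioning vector (assumed design)."""
    def __init__(self, cond_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, cond_dim),
        )

    def forward(self, sketch):
        # sketch: (B, 1, H, W) -> (B, cond_dim)
        return self.net(sketch)

class ConditionalEpsNet(nn.Module):
    """Toy per-point MLP predicting the noise eps from (x_t, t, sketch condition)."""
    def __init__(self, cond_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 1 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x_t, t, cond):
        # x_t: (B, N, 3); t: (B,); cond: (B, cond_dim)
        B, N, _ = x_t.shape
        t_feat = t.float().view(B, 1, 1).expand(B, N, 1)
        c_feat = cond.unsqueeze(1).expand(B, N, cond.shape[-1])
        return self.net(torch.cat([x_t, t_feat, c_feat], dim=-1))

@torch.no_grad()
def sample_points(eps_net, encoder, sketch, n_points=2048, T=100):
    """Standard DDPM ancestral sampling, conditioned on the sketch embedding."""
    betas = torch.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    cond = encoder(sketch)                          # (B, cond_dim)
    x = torch.randn(sketch.shape[0], n_points, 3)   # start from Gaussian noise
    for t in reversed(range(T)):
        t_batch = torch.full((sketch.shape[0],), t)
        eps = eps_net(x, t_batch, cond)
        # Posterior mean with the eps-parameterization of the reverse step.
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x  # (B, n_points, 3) coarse point cloud

# Example: sample_points(ConditionalEpsNet(), SketchEncoder(), torch.randn(1, 1, 64, 64))
# yields a (1, 2048, 3) point cloud from an untrained toy model.
```

In the paper's framing, the output of this diffusion stage would then be passed to a separate refinement network that sharpens details and topology; that second stage is omitted here.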