DragAPart: Learning a Part-Level Motion Prior for Articulated Objects
arXiv (2024)
Abstract
We introduce DragAPart, a method that, given an image and a set of drags as
input, can generate a new image of the same object in a new state, compatible
with the action of the drags. Differently from prior works that focused on
repositioning objects, DragAPart predicts part-level interactions, such as
opening and closing a drawer. We study this problem as a proxy for learning a
generalist motion model, not restricted to a specific kinematic structure or
object category. To this end, we start from a pre-trained image generator and
fine-tune it on a new synthetic dataset, Drag-a-Move, which we introduce.
Combined with a new encoding for the drags and dataset randomization, the new
model generalizes well to real images and different categories. Compared to
prior motion-controlled generators, we demonstrate much better part-level
motion understanding.
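The abstract mentions "a new encoding for the drags" as conditioning for the image generator. A minimal sketch of one plausible way to rasterize drags into a spatial conditioning tensor is shown below; the channel layout and normalization here are illustrative assumptions, not the paper's actual encoding.

```python
import numpy as np

def encode_drags(drags, height, width):
    """Rasterize a set of drags into a conditioning tensor.

    Each drag is ((x0, y0), (x1, y1)) in pixel coordinates.
    The output has 4 channels: a source-point mask, a target-point
    mask, and the normalized (dx, dy) displacement stamped at the
    source location. This is a simplified illustration only.
    """
    cond = np.zeros((4, height, width), dtype=np.float32)
    for (x0, y0), (x1, y1) in drags:
        cond[0, y0, x0] = 1.0                  # source-point mask
        cond[1, y1, x1] = 1.0                  # target-point mask
        cond[2, y0, x0] = (x1 - x0) / width    # normalized dx at source
        cond[3, y0, x0] = (y1 - y0) / height   # normalized dy at source
    return cond

# Example: one drag pulling a drawer handle 32 px to the right
cond = encode_drags([((10, 20), (42, 20))], height=64, width=64)
print(cond.shape)  # (4, 64, 64)
```

In practice such a tensor could be concatenated to the generator's input channels, letting the model condition each spatial location on nearby drag endpoints and directions.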