Vid2Robot: End-to-end Video-conditioned Policy Learning with Cross-Attention Transformers
arXiv (2024)
Abstract
While large-scale robotic systems typically rely on textual instructions for
tasks, this work explores a different approach: can robots infer the task
directly from observing humans? This shift necessitates the robot's ability to
decode human intent and translate it into executable actions within its
physical constraints and environment. We introduce Vid2Robot, a novel
end-to-end video-based learning framework for robots. Given a video
demonstration of a manipulation task and current visual observations, Vid2Robot
directly produces robot actions. This is achieved through a unified
representation model trained on a large dataset of human videos and robot
trajectories. The model leverages cross-attention mechanisms to fuse prompt-video
features with the robot's current state and generate appropriate actions that
mimic the observed task. To further improve policy performance, we propose
auxiliary contrastive losses that enhance the alignment between human and robot
video representations. We evaluate Vid2Robot on real-world robots,
demonstrating a 20% improvement over other video-conditioned policies when
using human demonstration videos. Additionally,
our model exhibits emergent capabilities, such as successfully transferring
observed motions from one object to another, and long-horizon composition, thus
showcasing its potential for real-world applications. Project website:
vid2robot.github.io
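The two mechanisms named in the abstract, cross-attention that lets robot-state tokens attend to prompt-video features, and a contrastive loss that pulls paired human/robot video embeddings together, can be sketched as follows. This is a minimal, unbatched, single-head NumPy illustration; the function names (`cross_attention`, `info_nce`), dimensions, and temperature are assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(state_tokens, prompt_tokens, Wq, Wk, Wv):
    """Robot-state tokens (queries) attend to prompt-video tokens (keys/values).

    state_tokens:  (S, d) tokens from the robot's current observation
    prompt_tokens: (P, d) tokens from the human demonstration video
    Returns (S, d) fused features conditioning the action head.
    """
    q = state_tokens @ Wq                      # (S, d)
    k = prompt_tokens @ Wk                     # (P, d)
    v = prompt_tokens @ Wv                     # (P, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])    # (S, P) scaled dot products
    attn = softmax(scores, axis=-1)            # each query sums to 1 over prompt
    return attn @ v

def info_nce(human_emb, robot_emb, temp=0.1):
    """One-directional InfoNCE: row i of human_emb and row i of robot_emb
    depict the same task (positives); all other rows are negatives."""
    h = human_emb / np.linalg.norm(human_emb, axis=-1, keepdims=True)
    r = robot_emb / np.linalg.norm(robot_emb, axis=-1, keepdims=True)
    logits = h @ r.T / temp                    # (B, B) cosine similarities
    idx = np.arange(len(h))
    return -np.log(softmax(logits, axis=-1)[idx, idx]).mean()

rng = np.random.default_rng(0)
d, S, P = 16, 4, 8
Wq, Wk, Wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
state = rng.standard_normal((S, d))
prompt = rng.standard_normal((P, d))
fused = cross_attention(state, prompt, Wq, Wk, Wv)

human = rng.standard_normal((6, d))
robot = human + 0.05 * rng.standard_normal((6, d))  # nearly paired embeddings
loss = info_nce(human, robot)
print(fused.shape)  # (4, 16)
```

In the full model these pieces would sit inside a transformer trained end to end, with the contrastive term acting as an auxiliary loss alongside action prediction.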