LEGO: Learning EGOcentric Action Frame Generation via Visual Instruction Tuning
arXiv (2023)
Abstract
Generating instructional images of human daily actions from an egocentric
viewpoint serves as a key step towards efficient skill transfer. In this paper,
we introduce a novel problem – egocentric action frame generation. The goal is
to synthesize an image depicting an action in the user's context (i.e., action
frame) by conditioning on a user prompt and an input egocentric image. Notably,
existing egocentric action datasets lack the detailed annotations that describe
the execution of actions. Additionally, existing diffusion-based image
manipulation models are sub-optimal in controlling the state transition of an
action in egocentric image pixel space because of the domain gap. To this end,
we propose to Learn EGOcentric (LEGO) action frame generation via visual
instruction tuning. First, we introduce a prompt enhancement scheme to generate
enriched action descriptions from a visual large language model (VLLM) via
visual instruction tuning. Then, we propose a novel method to leverage image and
text embeddings from the VLLM as additional conditioning to improve the
performance of a diffusion model. We validate our model on two egocentric
datasets – Ego4D and Epic-Kitchens. Our experiments show substantial
improvement over prior image manipulation models in both quantitative and
qualitative evaluation. We also conduct detailed ablation studies and analysis
to provide insights into our method. More details on the dataset and code are
available on the project website (https://bolinlai.github.io/Lego_EgoActGen/).
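The conditioning scheme summarized in the abstract (VLLM image and text embeddings used as additional conditioning for a diffusion model) can be illustrated with a minimal PyTorch sketch. This is not the paper's actual implementation: the module names, tensor dimensions, and the residual cross-attention layout below are all illustrative assumptions, showing only the general idea of letting diffusion features attend to extra VLLM-derived embeddings alongside the usual text condition.

```python
import torch
import torch.nn as nn


class VLLMConditionedCrossAttention(nn.Module):
    """Illustrative cross-attention block: diffusion U-Net features attend to
    the standard text condition concatenated with projected VLLM image/text
    embeddings. All dimensions are assumptions, not taken from the paper."""

    def __init__(self, unet_dim=320, cond_dim=768, num_heads=8):
        super().__init__()
        # Project the VLLM embeddings into the shared conditioning space.
        self.img_proj = nn.Linear(cond_dim, cond_dim)
        self.txt_proj = nn.Linear(cond_dim, cond_dim)
        self.attn = nn.MultiheadAttention(
            embed_dim=unet_dim, kdim=cond_dim, vdim=cond_dim,
            num_heads=num_heads, batch_first=True)

    def forward(self, unet_feats, text_cond, vllm_img, vllm_txt):
        # unet_feats: (B, N, unet_dim) flattened spatial features
        # text_cond / vllm_img / vllm_txt: (B, L, cond_dim) token embeddings
        cond = torch.cat(
            [text_cond, self.img_proj(vllm_img), self.txt_proj(vllm_txt)],
            dim=1)
        out, _ = self.attn(unet_feats, cond, cond)
        return unet_feats + out  # residual connection


# Smoke test with random tensors (hypothetical shapes).
block = VLLMConditionedCrossAttention()
feats = torch.randn(2, 64, 320)
out = block(feats,
            torch.randn(2, 77, 768),   # text condition tokens
            torch.randn(2, 32, 768),   # VLLM image embeddings
            torch.randn(2, 32, 768))   # VLLM text embeddings
print(out.shape)  # torch.Size([2, 64, 320])
```

In this sketch the extra embeddings are simply concatenated along the token axis of the cross-attention keys and values, which is one common way to inject additional conditioning into a pretrained diffusion backbone without changing its feature dimensions.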