A Joint Modeling of Vision-Language-Action for Target-oriented Grasping in Clutter

arXiv (2023)

Cited by 6
Abstract
We focus on the task of language-conditioned grasping in clutter, in which a robot is supposed to grasp the target object based on a language instruction. Previous works separately conduct visual grounding to localize the target object and then generate a grasp for that object. However, these works require object labels or visual attributes for grounding, which calls for handcrafted rules in the planner and restricts the range of language instructions. In this paper, we propose to jointly model vision, language, and action with an object-centric representation. Our method is applicable under more flexible language instructions and is not limited by visual grounding errors. Besides, by utilizing the powerful priors of a pre-trained multi-modal model and a grasp model, sample efficiency is effectively improved and the sim2real gap is alleviated without additional data for transfer. A series of experiments carried out in simulation and the real world indicate that our method achieves a higher task success rate with fewer motions under more flexible language instructions. Moreover, our method generalizes better to scenarios with unseen objects and language instructions.
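The abstract describes grounding a language instruction against object-centric visual features from a pre-trained multi-modal model. The sketch below is not the authors' implementation; it only illustrates the generic matching step, assuming CLIP-style embeddings where instruction and object-crop vectors live in a shared space. The mock random vectors stand in for real encoder outputs, and all function names are hypothetical:

```python
import numpy as np

def cosine_sim(a, b):
    # cosine similarity between two embedding vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_target(object_embeddings, instruction_embedding):
    # pick the object crop whose embedding best matches the instruction;
    # a real system would feed crops and text through a shared encoder
    scores = [cosine_sim(e, instruction_embedding) for e in object_embeddings]
    return int(np.argmax(scores)), scores

# mock embeddings standing in for pre-trained encoder outputs
rng = np.random.default_rng(0)
object_embeddings = [rng.normal(size=64) for _ in range(3)]
# make the instruction embedding close to object 1, as if it described it
instruction_embedding = object_embeddings[1] + 0.1 * rng.normal(size=64)

idx, scores = select_target(object_embeddings, instruction_embedding)
```

In an end-to-end system like the one the abstract proposes, this hard argmax would be replaced by a learned policy over the object-centric features, so a grounding error does not irrecoverably commit the robot to the wrong object.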
Keywords
clutter,flexible language instructions,language instruction,language-conditioned grasping,object labels,object-centric representation,pre-trained multimodal model,sim2real problem,target object,target-oriented grasping,unseen objects,vision-language-action,visual attributes,visual grounding error