Language-Image Models with 3D Understanding
arXiv (2024)
Abstract
Multi-modal large language models (MLLMs) have shown incredible capabilities
in a variety of 2D vision and language tasks. We extend MLLMs' perceptual
capabilities to ground and reason about images in 3-dimensional space. To that
end, we first develop a large-scale pre-training dataset for 2D and 3D called
LV3D by combining multiple existing 2D and 3D recognition datasets under a
common task formulation: as multi-turn question-answering. Next, we introduce a
new MLLM named Cube-LLM and pre-train it on LV3D. We show that pure data
scaling yields strong 3D perception capability without any 3D-specific
architectural design or training objective. Cube-LLM exhibits intriguing
properties similar to LLMs: (1) Cube-LLM can apply chain-of-thought prompting
to improve 3D understanding from 2D context information. (2) Cube-LLM can
follow complex and diverse instructions and adapt to versatile input and output
formats. (3) Cube-LLM can be visually prompted, e.g., with a 2D box or a set
of candidate 3D boxes from specialist models. Our experiments on outdoor benchmarks
demonstrate that Cube-LLM significantly outperforms existing baselines by 21.3
points of AP-BEV on the Talk2Car dataset for 3D grounded reasoning and 17.7
points on the DriveLM dataset for complex reasoning about driving scenarios,
respectively. Cube-LLM also shows competitive results on general MLLM
benchmarks such as refCOCO for 2D grounding, with an average score of 87.0, as
well as on visual question answering benchmarks such as VQAv2, GQA, SQA, and
POPE for complex reasoning. Our project is available at
https://janghyuncho.github.io/Cube-LLM.
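The abstract's common task formulation recasts existing 2D and 3D annotations as multi-turn question-answering. A minimal sketch of what one such sample might look like is below; the field names, text templates, and box parameterization are assumptions for illustration, not the paper's actual LV3D format. The easy-to-hard ordering (2D grounding first, then the 3D box) mirrors the chain-of-thought idea of reaching a 3D answer from 2D context.

```python
# Hypothetical sketch: recasting one 3D detection label as a
# multi-turn QA sample, in the spirit of LV3D's unified formulation.
# Field names and templates are illustrative assumptions.

def box_to_qa_turns(category, box2d, box3d):
    """Turn one annotated object into easy-to-hard QA turns:
    a 2D grounding question first, then the full 3D box, so the
    model can condition its 3D answer on 2D context."""
    x1, y1, x2, y2 = box2d                 # pixel-space 2D box
    x, y, z, w, h, l, yaw = box3d          # assumed 7-DoF 3D box
    return [
        {"question": f"Where is the {category} in the image?",
         "answer": f"[{x1}, {y1}, {x2}, {y2}]"},
        {"question": f"What is the 3D bounding box of the {category}?",
         "answer": f"[{x:.1f}, {y:.1f}, {z:.1f}, "
                   f"{w:.1f}, {h:.1f}, {l:.1f}, {yaw:.2f}]"},
    ]

# Example: one car annotation becomes a two-turn conversation.
turns = box_to_qa_turns("car", (100, 120, 260, 240),
                        (3.2, 1.1, 15.0, 1.8, 1.5, 4.2, 0.05))
```

Emitting both turns per object lets the same dataset teach 2D grounding and 3D grounding under a single text interface, which is what allows pure data scaling without a 3D-specific head.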