Exploring Perceptual Limitation of Multimodal Large Language Models
CoRR (2024)
Abstract
Multimodal Large Language Models (MLLMs) have recently shown remarkable
perceptual capability in answering visual questions; however, little is known
about the limits of their perception. In particular, while prior works have
provided anecdotal evidence of MLLMs' sensitivity to object size, this
phenomenon and its underlying causes have not been explored comprehensively. In
this work, we quantitatively study the perception of small visual objects in
several state-of-the-art MLLMs and reveal a pervasive limitation in answering
questions about small objects in images. Next, we identify four independent
factors that can contribute to this limitation – object quality, size,
distractors, and location – and conduct controlled intervention studies to
measure the effect of each factor on MLLMs' perception. In particular, we find
that lower object quality and smaller object size can both independently reduce
MLLMs' ability to answer visual questions. More surprisingly, we find that the
location of the object in the image and the presence of visual distractors can
also significantly reduce MLLMs' question answering accuracy. Our study
provides a better understanding of the perceptual limitation of MLLMs and
contributes new evaluation protocols for analyzing the perception of future
MLLMs. To facilitate further investigations, we release our code and data.