On the Robustness of Language Guidance for Low-Level Vision Tasks: Findings from Depth Estimation
CVPR 2024
Abstract
Recent advances in monocular depth estimation have been made by incorporating
natural language as additional guidance. Although yielding impressive results,
the impact of the language prior, particularly in terms of generalization and
robustness, remains unexplored. In this paper, we address this gap by
quantifying the impact of this prior and introduce methods to benchmark its
effectiveness across various settings. We generate "low-level" sentences that
convey object-centric, three-dimensional spatial relationships, incorporate
them as additional language priors and evaluate their downstream impact on
depth estimation. Our key finding is that current language-guided depth
estimators perform optimally only with scene-level descriptions and
counter-intuitively fare worse with low-level descriptions. Despite leveraging
additional data, these methods are not robust to directed adversarial attacks
and decline in performance with an increase in distribution shift. Finally, to
provide a foundation for future research, we identify points of failure and
offer insights to better understand these shortcomings. With an increasing
number of methods using language for depth estimation, our findings highlight
the opportunities and pitfalls that require careful consideration for effective
deployment in real-world settings.
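To make the notion of a "low-level" sentence concrete, the sketch below shows one plausible way such object-centric, three-dimensional spatial descriptions could be composed from per-object depth estimates. This is an illustrative assumption, not the authors' actual generation pipeline; the function name, inputs, and wording template are hypothetical.

```python
# Hypothetical sketch: composing a "low-level" sentence that encodes an
# object-centric 3D spatial relationship, in contrast to a scene-level
# description such as "a photo of a bedroom".

def low_level_sentence(obj_a: str, obj_b: str, depth_a: float, depth_b: float) -> str:
    """Describe the relative depth of two objects.

    depth_a / depth_b are assumed to be mean depths (e.g. in meters) of each
    object's region, such as averages over a ground-truth depth map.
    """
    if depth_a < depth_b:
        relation = "closer to the camera than"
    else:
        relation = "farther from the camera than"
    return f"A {obj_a} is {relation} a {obj_b}."

print(low_level_sentence("chair", "bed", 1.2, 3.4))
# → A chair is closer to the camera than a bed.
```

Sentences of this form could then be supplied to a language-guided depth estimator as an additional prior, which is the setting whose robustness the paper benchmarks.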