On the generalization capacity of neural networks during generic multimodal reasoning
CoRR (2024)
Abstract
The advent of the Transformer has led to the development of large language
models (LLMs), which appear to demonstrate human-like capabilities. To assess
how well this class of models, along with a variety of other base neural
network architectures, generalizes to multimodal domains, we evaluated and
compared their capacity for multimodal generalization. We introduce a multimodal
question-answer benchmark to evaluate three specific types of
out-of-distribution (OOD) generalization performance: distractor generalization
(generalization in the presence of distractors), systematic compositional
generalization (generalization to new task permutations), and productive
compositional generalization (generalization to more complex task structures).
We found that across model architectures (e.g., RNNs, Transformers, and
Perceivers), models with multiple attention layers, or models that leveraged
cross-attention mechanisms between input domains, fared better. Our positive
results demonstrate that for multimodal distractor and systematic
generalization, either cross-modal attention or models with deeper attention
layers are key architectural features required to integrate multimodal inputs.
On the other hand, neither of these architectural features led to productive
generalization, suggesting fundamental limitations of existing architectures
for specific types of multimodal generalization. These results demonstrate the
strengths and limitations of specific architectural components underlying
modern neural models for multimodal reasoning. Finally, we provide Generic COG
(gCOG), a configurable benchmark with several multimodal generalization splits,
for future studies to explore.
Keywords
compositional generalization, compositionality, representation learning, out-of-distribution generalization, multimodal reasoning