SAE: Single Architecture Ensemble Neural Networks
CoRR (2024)
Abstract
Ensembles of separate neural networks (NNs) have shown superior accuracy and
confidence calibration over a single NN across tasks. Recent methods compress
ensembles within a single network via early exits or multi-input multi-output
frameworks. However, the landscape of these methods has so far been fragmented,
making it difficult to choose the right approach for a given task. Furthermore,
the algorithmic performance of these methods lags behind that of ensembles of
separate NNs and requires extensive architecture tuning. We propose a novel
methodology that unifies these approaches into a Single Architecture Ensemble
(SAE). Our method learns the optimal number and depth of exits per ensemble
input in a single NN, enabling the SAE framework to flexibly tailor its
configuration to a given architecture or application. We evaluate SAEs on image
classification and regression across various network architecture types and
sizes. We demonstrate accuracy and confidence calibration competitive with the
baselines while reducing the compute operations or parameter count by up to
1.5–3.7×.
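To make the two ingredients the abstract combines more concrete, the sketch below shows a minimal PyTorch model with multiple ensemble inputs passed through one shared backbone (MIMO-style) and early-exit prediction heads at several depths. This is an illustrative assumption-based sketch, not the paper's actual architecture: all names (SAESketch), layer choices, and shapes are hypothetical, and the paper additionally learns which exits and how many to use per input, which is not implemented here.

```python
import torch
import torch.nn as nn

class SAESketch(nn.Module):
    """Hypothetical sketch (not the paper's code): a shared backbone with
    M ensemble member inputs (MIMO-style) and an early-exit head per block."""

    def __init__(self, num_members=3, num_classes=10, width=64, depth=4):
        super().__init__()
        self.num_members = num_members
        self.num_classes = num_classes
        # MIMO-style input: stack the M member images along the channel axis.
        self.stem = nn.Conv2d(3 * num_members, width, 3, padding=1)
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(width, width, 3, padding=1),
                nn.BatchNorm2d(width),
                nn.ReLU(),
            )
            for _ in range(depth)
        ])
        # One early-exit head per block, predicting logits for all M members.
        self.exits = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(width, num_members * num_classes),
            )
            for _ in range(depth)
        ])

    def forward(self, xs):
        # xs: list of M tensors, each (B, 3, H, W); one input per member.
        h = self.stem(torch.cat(xs, dim=1))
        outputs = []  # logits per exit, each of shape (B, M, num_classes)
        for block, exit_head in zip(self.blocks, self.exits):
            h = block(h)
            logits = exit_head(h)
            outputs.append(logits.view(-1, self.num_members, self.num_classes))
        return outputs


# Usage: a crude ensemble prediction, averaging over exits and members.
model = SAESketch()
xs = [torch.randn(2, 3, 32, 32) for _ in range(3)]
per_exit = model(xs)                        # depth x (B, M, C)
avg_exits = torch.stack(per_exit).mean(0)   # average over exits: (B, M, C)
prediction = avg_exits.softmax(-1).mean(1)  # average over members: (B, C)
```

In this sketch the exits are simply averaged; per the abstract, SAE instead learns the number and depth of exits used for each ensemble input, so that shallow exits can be pruned to save compute or parameters.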