Going Deeper, Generalizing Better: An Information-Theoretic View for Deep Learning

IEEE Transactions on Neural Networks and Learning Systems (2023)

Abstract
Deep learning has transformed computer vision, natural language processing, and speech recognition. However, two critical questions remain open: 1) why do deep neural networks (DNNs) generalize better than shallow networks and 2) does a deeper network always lead to better performance? In this article, we first show that the expected generalization error of neural networks (NNs) can be upper bounded by the mutual information between the learned features in the last hidden layer and the parameters of the output layer. This bound further implies that, under mild conditions, the expected generalization error decreases as the number of layers in the network increases. Layers with strict information loss, such as convolutional or pooling layers, reduce the generalization error of the whole network; this answers the first question. However, a zero expected generalization error does not imply a small test error, because the expected training error becomes large when the information needed to fit the data is lost as the number of layers increases. This suggests that the claim "the deeper the better" is conditioned on a small training error. Finally, we show that deep learning satisfies a weak notion of stability and provide generalization error bounds for noisy stochastic gradient descent (SGD) and binary classification in DNNs.
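For context, the following is a minimal sketch of the kind of information-theoretic generalization bound the abstract alludes to, written in the standard Xu-Raginsky form for a sigma-sub-Gaussian loss over n training samples; the paper's own bound instead uses the mutual information between the last-hidden-layer features and the output-layer parameters, and its exact constants may differ:

    \left| \mathbb{E}\!\left[ R(W) - R_S(W) \right] \right| \le \sqrt{\frac{2\sigma^2}{n}\, I(S; W)}

Here R(W) denotes the population risk, R_S(W) the empirical risk on the training sample S, W the learned parameters, and I(S; W) the mutual information between the sample and the parameters.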
Keywords
Deep neural networks (DNNs), generalization, information theory, learning theory