On the expressive power of deep neural networks.
International Conference on Machine Learning (2017)
Abstract
We study the expressive power of deep neural networks before and after training. Considering neural nets after random initialization, we show that three natural measures of expressivity all display an exponential dependence on the depth of the network. We prove, theoretically and experimentally, that all of these measures are in fact related to a fourth quantity, trajectory length. This quantity grows exponentially in the depth of the network, and is responsible for the depth sensitivity observed. These results translate to consequences for networks during and after training. They suggest that parameters earlier in a network have greater influence on its expressive power – in particular, given a layer, its influence on expressivity is determined by the remaining depth of the network after that layer. This is verified with experiments on MNIST and CIFAR-10. We also explore the effect of training on the input-output map, and find that it trades off between the stability and expressivity of the input-output map.
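The central quantity in the abstract, trajectory length, can be illustrated with a small numerical sketch: pass a 1-D input trajectory through a randomly initialized network and measure its arc length at each layer. The width, depth, weight scale, and use of ReLU below are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

def trajectory_length(points):
    """Arc length: sum of Euclidean distances between consecutive points."""
    return np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))

rng = np.random.default_rng(0)
width, depth, sigma_w = 100, 10, 2.0  # illustrative sizes, not the paper's

# A circular 1-D input trajectory embedded in a `width`-dimensional space.
t = np.linspace(0.0, 2.0 * np.pi, 500)
traj = np.zeros((t.size, width))
traj[:, 0], traj[:, 1] = np.cos(t), np.sin(t)

lengths = [trajectory_length(traj)]
h = traj
for _ in range(depth):
    # Random Gaussian weights with variance sigma_w**2 / width per entry.
    W = rng.normal(0.0, sigma_w / np.sqrt(width), size=(width, width))
    h = np.maximum(h @ W, 0.0)  # ReLU layer
    lengths.append(trajectory_length(h))

# For sufficiently large sigma_w, the length grows roughly geometrically
# with depth, matching the exponential dependence described above.
print(lengths)
```

Printing `lengths` layer by layer shows the roughly geometric growth in depth that the abstract attributes to trajectory length.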