Learning Pooling for Convolutional Neural Network

Neurocomputing (2017)

Cited 177 | Viewed 77
Abstract
Convolutional neural networks (CNNs) consist of alternating convolutional layers and pooling layers. A pooling layer applies a pooling operator to aggregate information within each small region of the input feature channels and then downsamples the result. Typically, hand-crafted pooling operations are used to aggregate information within a region, but they are not guaranteed to minimize the training error. To overcome this drawback, we propose a learned pooling operation, called LEAP (LEArning Pooling), obtained by end-to-end training. Specifically, in our method, one shared linear combination of the neurons in the region is learned for each feature channel (map). In fact, average pooling can be seen as a special case of our method in which all the weights are equal. In addition, inspired by the LEAP operation, we propose a simplified convolution operation to replace traditional convolution, which requires many extra parameters. The simplified convolution greatly reduces the number of parameters while maintaining comparable performance. By combining the proposed LEAP method and the simplified convolution, we demonstrate state-of-the-art classification performance with a moderate number of parameters on three public object recognition benchmarks: the CIFAR10, CIFAR100, and ImageNet2012 datasets.
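The core idea of LEAP, as stated in the abstract, is to replace a hand-crafted aggregation with one learned linear combination of the neurons in each pooling region, shared across each feature channel. A minimal NumPy sketch of that forward pass is shown below; the function name `leap_pool`, the `(C, H, W)` layout, and the non-overlapping k×k regions are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def leap_pool(x, weights, k=2):
    """LEAP-style pooling sketch: aggregate each non-overlapping k x k
    region with a learned weight kernel shared within its channel.

    x:       (C, H, W) input feature maps
    weights: (C, k, k) learned combination weights, one kernel per channel
    returns: (C, H // k, W // k) pooled feature maps
    """
    C, H, W = x.shape
    out = np.zeros((C, H // k, W // k))
    for c in range(C):
        for i in range(H // k):
            for j in range(W // k):
                region = x[c, i * k:(i + 1) * k, j * k:(j + 1) * k]
                # Learned linear combination of the neurons in the region
                out[c, i, j] = np.sum(region * weights[c])
    return out

# Sanity check of the special case noted in the abstract: with all
# weights equal to 1/k^2, LEAP reduces to ordinary average pooling.
x = np.arange(16, dtype=float).reshape(1, 4, 4)
avg_weights = np.full((1, 2, 2), 0.25)
print(leap_pool(x, avg_weights))  # prints [[[ 2.5  4.5] [10.5 12.5]]]
```

In training, `weights` would be a learnable parameter updated by backpropagation along with the convolutional filters; the uniform-weight check above simply confirms that average pooling lies inside the LEAP hypothesis space.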
Keywords
Convolutional Neural Networks,Object Recognition,Learning Pooling,Simplified Convolution