NFP: A No Fine-tuning Pruning Approach for Convolutional Neural Network Compression

2020 3rd International Conference on Artificial Intelligence and Big Data (ICAIBD), 2020

Cited by 4 | Views 3
Abstract
Pruning convolutional neural networks has proved to be an effective approach to reducing memory and computation. In this paper, we propose a novel pruning approach called NFP (No Fine-tuning Pruning), which calculates the contribution of the pruned convolutional filters to the next convolutional layer and compensates for this contribution in the biases of the next layer's filters. Owing to this compensation, the accuracy of the pruned network is almost the same as that of the original network. Therefore, NFP takes only moments to obtain a compact, accuracy-retaining network without fine-tuning. We demonstrate the effectiveness of our approach on several CNN models. For ResNet-50, NFP reduces the number of parameters by 34% and the FLOPs by 37% without loss of accuracy. For the object detection task, our approach also achieves excellent results with YOLO-V3.
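The core idea described in the abstract is that a pruned filter's output no longer reaches the next layer, so its approximate contribution is folded into the next layer's biases instead of being recovered by fine-tuning. Below is a minimal PyTorch sketch of this kind of bias compensation for two adjacent Conv2d layers; it assumes the pruned channels' activations can be approximated by their per-channel means over a calibration batch and ignores any intermediate BatchNorm/ReLU. These are simplifications of ours for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def prune_with_bias_compensation(conv1, conv2, prune_idx, calib_input):
    """Remove output filters `prune_idx` from conv1 and fold their average
    contribution into conv2's bias (illustrative sketch, not the paper's
    exact procedure)."""
    # Per-channel mean activation of conv1 over a calibration batch
    # (assumption: this stands in for each pruned channel's contribution).
    act = conv1(calib_input)                       # (N, C1_out, H, W)
    mean_act = act.mean(dim=(0, 2, 3))             # (C1_out,)

    keep_idx = [i for i in range(conv1.out_channels) if i not in set(prune_idx)]

    # Each conv2 filter sees a roughly constant input mean_act[j] on a pruned
    # channel j, so its output shifts by mean_act[j] * (sum of its weights on
    # that channel); add this shift to conv2's bias.
    w2 = conv2.weight                              # (C2_out, C1_out, k, k)
    shift = torch.zeros(conv2.out_channels, device=w2.device)
    for j in prune_idx:
        shift += mean_act[j] * w2[:, j].sum(dim=(1, 2))
    if conv2.bias is None:
        conv2.bias = nn.Parameter(torch.zeros(conv2.out_channels, device=w2.device))
    conv2.bias.add_(shift)

    # Physically drop the pruned filters from conv1 and the matching input
    # channels from conv2.
    conv1.weight = nn.Parameter(conv1.weight[keep_idx].clone())
    if conv1.bias is not None:
        conv1.bias = nn.Parameter(conv1.bias[keep_idx].clone())
    conv1.out_channels = len(keep_idx)
    conv2.weight = nn.Parameter(w2[:, keep_idx].clone())
    conv2.in_channels = len(keep_idx)
    return conv1, conv2
```

Because the compensation is a closed-form bias update rather than a retraining step, the pruned pair of layers produces outputs close to the original ones immediately, which is what lets this style of pruning skip fine-tuning altogether.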
Keywords
convolutional neural network,pruning,object detection