Encoder Optimizations for the NNR Standard on Neural Network Compression

ICIP 2021

Abstract
The novel Neural Network Compression and Representation Standard (NNR), recently issued by ISO/IEC MPEG, achieves very high coding gains, compressing neural networks to 5% in size without accuracy loss. The underlying NNR encoder technology includes parameter quantization, followed by efficient arithmetic coding, namely DeepCABAC. In addition, NNR also allows very flexible adaptations, such as signaling specific local scaling values, setting quantization parameters per tensor rather than per network and supporting specific parameter fusion operations. This paper presents our new approach for optimally deriving these parameters, namely the derivation of parameters for local scaling adaptation (LSA), inference-optimized quantization (IOQ), and batch-norm folding (BNF). By allowing inference and fine tuning within the encoding process, quantization errors are reduced and the NNR coding efficiency is further improved to a compressed bitstream size of only 3% in comparison to the original model size.
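Of the encoder adaptations named above, batch-norm folding (BNF) is a standard, well-defined transformation: the batch-norm scale and shift are fused into the preceding layer's weights and bias so that only the fused tensors need to be quantized and coded. The sketch below illustrates the generic fold for a linear/convolutional layer in NumPy; it is a minimal illustration of the technique, not the NNR reference encoder's implementation, and the function name and shapes are assumptions.

```python
import numpy as np

def fold_batch_norm(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold batch-norm parameters into the preceding layer.

    Generic BNF sketch (not the NNR reference code). Assumes the
    output-channel axis is axis 0 of W.
      W:     weight tensor, shape (out_channels, ...)
      b:     bias, shape (out_channels,)
      gamma, beta, mean, var: per-channel BN parameters
    Returns the fused (W', b') such that
      BN(x @ W.T + b) == x @ W'.T + b'.
    """
    scale = gamma / np.sqrt(var + eps)                    # per-channel BN scale
    # Broadcast the per-channel scale over all remaining weight axes.
    W_folded = W * scale.reshape(-1, *([1] * (W.ndim - 1)))
    b_folded = (b - mean) * scale + beta
    return W_folded, b_folded
```

After folding, the BN tensors disappear from the bitstream and the fused weights are quantized and entropy-coded as a single tensor, which is one source of the coding gain the abstract describes.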
Keywords
MPEG,NNR,DeepCABAC,neural network compression,encoder optimization