History Dependent Significance Coding for Incremental Neural Network Compression

ICIP 2022

Abstract
This paper presents an improved probability estimation scheme for the entropy coder of Incremental Neural Network Coding (INNC), which is currently under standardization in ISO/IEC MPEG. More specifically, the paper first analyzes the compression performance of INNC and how the bitstream size relates to the neural network (NN) layers. For the layers requiring the most bits, it analyzes the coded NN weight updates and their temporal dependencies. A major finding is that the probability of a significant (i.e., non-zero) update for a weight can depend considerably on whether the weight has been updated before. Based on this finding, the paper proposes a new probability estimation scheme: depending on whether a significant update has been received before (i.e., based on the weight's history), the entropy coder models the probability of a current significant update differently. This scheme achieves bitstream size reductions of about 2% and 1% in a transfer and a federated learning scenario, respectively, without any accuracy loss or significant complexity increase. Therefore, MPEG adopted our history dependent significance probability (HDSP) scheme into its emerging standard for INNC.
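The core HDSP idea can be illustrated with a small sketch: keep one adaptive probability model per history state and select the model for a weight's significance flag based on whether that weight was significant in an earlier increment. The Python below is a minimal, hypothetical illustration under assumed details, not the normative estimator of the standard's entropy coder; the class names (AdaptiveBinaryModel, HDSPEstimator), the exponential-moving-average update rule, and the rate value are all assumptions chosen for clarity.

```python
from dataclasses import dataclass


@dataclass
class AdaptiveBinaryModel:
    """Adaptive estimate of P(significant), nudged toward each observed flag."""
    p_one: float = 0.5   # current probability estimate for flag == 1
    rate: float = 0.05   # adaptation speed (assumed value)

    def update(self, flag: int) -> None:
        # Exponential moving average toward the observed binary symbol.
        self.p_one += self.rate * (flag - self.p_one)


class HDSPEstimator:
    """Two significance models, selected by each weight's update history.

    Context 0: the weight has never received a significant update.
    Context 1: the weight was significant in at least one earlier increment.
    """

    def __init__(self) -> None:
        self.models = [AdaptiveBinaryModel(), AdaptiveBinaryModel()]
        self.updated_before: set[int] = set()  # weights seen as significant

    def _context(self, weight_idx: int) -> int:
        return 1 if weight_idx in self.updated_before else 0

    def probability(self, weight_idx: int) -> float:
        """Probability an arithmetic coder would use for this weight's flag."""
        return self.models[self._context(weight_idx)].p_one

    def observe(self, weight_idx: int, significant: int) -> None:
        """Update the selected model, then record the weight's history."""
        self.models[self._context(weight_idx)].update(significant)
        if significant:
            self.updated_before.add(weight_idx)


if __name__ == "__main__":
    est = HDSPEstimator()
    est.observe(weight_idx=7, significant=1)       # weight 7 enters context 1
    print(est.probability(7), est.probability(8))  # probabilities now differ
```

The design point this sketch captures is that splitting the significance flag into history-dependent contexts lets each model converge to a sharper probability than a single shared model, which is what yields the reported bitstream savings.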
Keywords
incremental neural network compression, entropy coding, probability estimation, machine learning