On Quantizing Implicit Neural Representations

2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023

Abstract
The role of quantization within implicit/coordinate neural networks is still not fully understood. We note that using a canonical fixed quantization scheme during training produces poor performance at low bit-rates because the network weight distributions change over the course of training. In this work, we show that a non-uniform quantization of neural weights can lead to significant improvements. Specifically, we demonstrate that a clustered quantization enables improved reconstruction. Finally, by characterising a trade-off between quantization and network capacity, we demonstrate that it is possible (though memory-inefficient) to reconstruct signals using binary neural networks. We demonstrate our findings experimentally on 2D image reconstruction and 3D radiance fields, and show that simple quantization methods and architecture search can compress NeRF to less than 16 kB with minimal loss in performance (323x smaller than the original NeRF).
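The clustered, non-uniform quantization the abstract refers to can be sketched as fitting a small codebook of centroids to the empirical weight distribution and snapping each weight to its nearest centroid. A minimal NumPy sketch, assuming a 1-D k-means (Lloyd's algorithm) codebook; the function name, cluster count, and quantile initialisation are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def cluster_quantize(weights, n_clusters=16, n_iters=25):
    """Non-uniform (clustered) quantization of a weight array.

    Fits `n_clusters` centroids to the weight distribution with a simple
    1-D k-means, then maps every weight to its nearest centroid.
    Returns (quantized_weights, centroids, codes).
    """
    w = weights.ravel().astype(np.float64)
    # Initialise centroids from quantiles so they track the distribution
    # (a uniform grid would waste levels where weights are sparse).
    centroids = np.quantile(w, np.linspace(0.0, 1.0, n_clusters))
    for _ in range(n_iters):
        # Assignment step: index of the nearest centroid per weight.
        codes = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        # Update step: move each centroid to the mean of its members.
        for k in range(n_clusters):
            members = w[codes == k]
            if members.size:
                centroids[k] = members.mean()
    codes = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
    return centroids[codes].reshape(weights.shape), centroids, codes

# Usage: a bimodal weight tensor quantized to 4 levels (2 bits/weight).
rng = np.random.default_rng(0)
weights = np.concatenate([rng.normal(-1.0, 0.05, 500),
                          rng.normal(+1.0, 0.05, 500)])
q, centroids, codes = cluster_quantize(weights, n_clusters=4)
```

Because the codebook adapts to where the weights actually lie, a bimodal distribution like the one above is reconstructed far more accurately at 2 bits than a fixed uniform grid over the same range would allow.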
Keywords
Algorithms: Computational photography, image and video synthesis, 3D computer vision; Machine learning architectures, formulations, and algorithms (including transfer, low-shot, semi-, self-, and un-supervised learning)