Quantized Proximal Averaging Networks for Compressed Image Recovery.

CVPR Workshops (2023)

Abstract
We solve the analysis sparse coding problem considering a combination of convex and non-convex sparsity-promoting penalties. The multi-penalty formulation leads to an iterative algorithm involving proximal averaging. We then unfold the iterative algorithm into a trainable network that facilitates learning the sparsity prior. We also consider quantization of the network weights. Quantization makes neural networks efficient in both memory and computation during inference, and also renders them suitable for deployment on low-precision hardware. Our learning algorithm is based on a variant of the ADAM optimizer in which the quantizer is part of the forward pass and the gradients of the loss function are evaluated with respect to the quantized weights, while a high-precision copy of the weights is maintained for book-keeping. We demonstrate applications to compressed image recovery and magnetic resonance image reconstruction. The proposed approach offers superior reconstruction accuracy and quality compared with state-of-the-art unfolding techniques, and the performance degradation is minimal even when the weights are subjected to extreme quantization.
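To make the two mechanisms described in the abstract concrete, the following is a minimal NumPy sketch. It assumes an l1 penalty as the convex term and the minimax concave penalty (MCP) as the non-convex term, and a uniform symmetric quantizer; the abstract does not specify these choices, so the penalty pair, the quantizer, and all function names (prox_l1, prox_mcp, prox_average, quantize, quantized_adam_step) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# --- Proximal averaging ---
# For a multi-penalty regularizer g(x) = alpha * g1(x) + (1 - alpha) * g2(x),
# proximal averaging approximates prox_g by the same convex combination of
# the individual proximal operators.

def prox_l1(x, lam):
    """Soft thresholding: proximal operator of lam * ||x||_1 (convex penalty)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_mcp(x, lam, gamma=2.0):
    """Proximal operator of the minimax concave penalty (non-convex), gamma > 1."""
    return np.where(np.abs(x) <= gamma * lam,
                    (gamma / (gamma - 1.0)) * prox_l1(x, lam),
                    x)

def prox_average(x, lam, alpha=0.5):
    """Convex combination of proximal operators (proximal averaging)."""
    return alpha * prox_l1(x, lam) + (1.0 - alpha) * prox_mcp(x, lam)

# --- Quantization-aware ADAM step ---
# The quantizer sits in the forward pass, gradients are evaluated at the
# quantized weights, and the update is applied to a high-precision copy.

def quantize(w, bits=2):
    """Uniform symmetric quantizer; assumes bits >= 2 and w not all zero."""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / levels
    return np.clip(np.round(w / scale), -levels, levels) * scale

def quantized_adam_step(w_fp, m, v, grad_fn, t, lr=1e-3,
                        beta1=0.9, beta2=0.999, eps=1e-8, bits=2):
    """One ADAM step with the quantizer in the forward pass."""
    w_q = quantize(w_fp, bits)   # quantized weights used in the forward pass
    g = grad_fn(w_q)             # loss gradient evaluated at quantized weights
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    w_fp = w_fp - lr * m_hat / (np.sqrt(v_hat) + eps)  # update high-precision copy
    return w_fp, m, v
```

Keeping the high-precision copy lets small gradient contributions accumulate across steps even when individual updates are too small to flip a quantized weight, which is why such schemes remain trainable under extreme quantization.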
Keywords
analysis sparse coding, compressed image recovery, extreme quantization, high-precision weights, iterative algorithm, learning algorithm, low-precision hardware deployment, magnetic resonance image reconstruction, multi-penalty formulation, network weights, neural networks, proximal averaging, quantized proximal averaging networks, quantized weights, quantizer, state-of-the-art unfolding techniques, trainable network