Improving weight clipping in Wasserstein GANs.

ICPR (2022)

Abstract
Weight clipping is a well-known strategy for keeping the Lipschitz constant of the critic under control during Wasserstein GAN training. After each training iteration, all parameters of the critic are clipped to a given box, impacting the progress made by the optimizer. In this work, we propose a new strategy for weight clipping in Wasserstein GANs. Instead of directly clipping the parameters, we first obtain an equivalent model that is closer to the clipping box, and only then clip the parameters. Our motivation is to decrease the impact of the clipping step on the objective at each iteration. This equivalent model is obtained by following invariant curves in the critic loss landscape, whose existence is a consequence of the positive homogeneity of common activations: rescaling the input and output signals of each activation by inverse factors preserves the loss. We provide preliminary experiments showing that the proposed strategy speeds up training of Wasserstein GANs with simple feedforward architectures.
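The invariance used here follows from the positive homogeneity of activations such as ReLU: relu(a·z) = a·relu(z) for a > 0, so scaling the weights into a hidden unit by a and the weights out of it by 1/a leaves the critic's function, and hence the loss, unchanged. Below is a minimal illustrative sketch for a two-layer ReLU critic, not the authors' exact algorithm: it assumes a per-hidden-unit factor chosen as the geometric-mean balance between incoming and outgoing weight magnitudes, and the helper name rescale_then_clip is hypothetical.

```python
import numpy as np

def rescale_then_clip(W1, W2, c=0.01):
    """Rescale (W1, W2) along an invariant curve, then clip to [-c, c].

    Sketch for a critic f(x) = W2 @ relu(W1 @ x). Rescaling row i of W1
    by a > 0 and column i of W2 by 1/a preserves f, by positive
    homogeneity of ReLU.
    """
    for i in range(W1.shape[0]):
        r1 = np.abs(W1[i, :]).max()   # largest entry feeding hidden unit i
        r2 = np.abs(W2[:, i]).max()   # largest entry leaving hidden unit i
        if r1 > 0 and r2 > 0:
            a = np.sqrt(r2 / r1)      # geometric-mean balance (assumption)
            W1[i, :] *= a             # relu(a*z) = a*relu(z) for a > 0 ...
            W2[:, i] /= a             # ... so the critic is unchanged
    np.clip(W1, -c, c, out=W1)        # standard WGAN weight clipping
    np.clip(W2, -c, c, out=W2)
    return W1, W2
```

With this balancing choice, the largest incoming and outgoing entries of each hidden unit both become sqrt(r1 * r2), so the larger of the two shrinks toward the clipping box before any value is actually clipped, which is the effect the abstract describes.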
Keywords
Wasserstein GANs, weight clipping