In Search of a Data Transformation That Accelerates Neural Field Training
arXiv (2023)
Abstract
The neural field is an emerging paradigm in data representation that trains a
neural network to approximate a given signal. A key obstacle preventing its
widespread adoption is the encoding speed: generating a neural field requires
overfitting a neural network, which can take a significant number of SGD steps
to reach the desired fidelity level. In this paper, we delve into the impact of
data transformations on the speed of neural field training, focusing
specifically on how permuting pixel locations affects the convergence speed of
SGD. Counterintuitively, we find that randomly permuting the pixel locations
can considerably accelerate training. To explain this phenomenon, we examine
neural field training through the lens of PSNR curves, loss landscapes, and
error patterns. Our analyses suggest that random pixel permutations remove
easy-to-fit patterns, which facilitate easy optimization in the early stage of
training but hinder capturing fine details of the signal.
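The data transformation studied here can be illustrated with a minimal sketch: given an image, randomly reassign each pixel value to a different grid location, then fit the neural field to (coordinate, value) pairs from the permuted image. The toy 8×8 signal, variable names, and NumPy setup below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy signal: an 8x8 grayscale "image".
H, W = 8, 8
image = rng.random((H, W))

# Random pixel-location permutation: shuffle which grid location
# each pixel value lands on. The permuted image contains exactly
# the same values, but its spatial structure is destroyed.
perm = rng.permutation(H * W)
permuted_image = image.ravel()[perm].reshape(H, W)

# A neural field would then be overfit on (coordinate, value) pairs
# drawn from `permuted_image`; since the permutation is known, it can
# be inverted after training to recover the original signal layout.
ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(np.float32)
targets = permuted_image.ravel()
```

Because the permutation is a bijection over pixel indices, the transformation is lossless: inverting it on the trained field's outputs reproduces the original pixel arrangement.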