
Convergence-Aware Neural Network Training

Proceedings of the 57th ACM/EDAC/IEEE Design Automation Conference (DAC), 2020

Abstract
Training a deep neural network (DNN) is expensive, requiring a large amount of computation time. While the training overhead is high, not all computation in DNN training is equal. Some parameters converge faster, so their gradient computation contributes little to the parameter update; near stationary points, a subset of parameters may change very little. In this paper we exploit parameter convergence to optimize gradient computation in DNN training. We design a lightweight monitoring technique to track parameter convergence, and we prune the gradient computation stochastically for groups of semantically related parameters, exploiting their convergence correlations. These techniques are implemented efficiently in existing GPU kernels. In our evaluation, the optimizations substantially and robustly improve the training throughput for four DNN models on three public datasets.
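The abstract does not spell out the algorithm, but the idea can be illustrated with a minimal PyTorch-style sketch. Everything below is an assumption for illustration only: convergence is approximated by an exponential moving average (EMA) of each parameter tensor's update magnitude, "groups" are whole parameter tensors rather than the paper's semantically related groups, and the stochastic pruning policy simply freezes near-converged tensors with a probability that grows as their updates shrink. The class name ConvergenceMonitor, the threshold, and the decay are hypothetical, not the authors' implementation.

import random
import torch


class ConvergenceMonitor:
    """Hypothetical sketch of convergence-aware gradient pruning.

    Tracks an EMA of the per-tensor update magnitude and, for tensors whose
    recent updates are small, stochastically disables gradient computation
    for the next iteration. This only approximates the paper's idea; the
    actual grouping, statistic, and GPU-kernel integration differ.
    """

    def __init__(self, model, ema_decay=0.9, threshold=1e-4):
        self.ema_decay = ema_decay
        self.threshold = threshold
        self.ema = {}
        # Snapshot of parameters to measure how much each one moves per step.
        self.prev = {n: p.detach().clone() for n, p in model.named_parameters()}

    @torch.no_grad()
    def update(self, model):
        """Call after optimizer.step() to refresh the convergence statistic."""
        for n, p in model.named_parameters():
            delta = (p.detach() - self.prev[n]).abs().mean().item()
            self.prev[n].copy_(p.detach())
            prev_ema = self.ema.get(n, delta)
            self.ema[n] = self.ema_decay * prev_ema + (1 - self.ema_decay) * delta

    def skip_probability(self, name):
        """Assumed policy: skip with probability 1 - ema/threshold once the
        EMA of the update magnitude falls below the threshold."""
        ema = self.ema.get(name)
        if ema is None or ema >= self.threshold:
            return 0.0
        return 1.0 - ema / self.threshold

    @torch.no_grad()
    def stochastically_prune_gradients(self, model):
        """Freeze near-converged tensors for the next iteration so autograd
        skips their weight-gradient kernels. A real implementation would also
        periodically re-enable frozen tensors to catch late drift."""
        for n, p in model.named_parameters():
            p.requires_grad_(random.random() >= self.skip_probability(n))

In a training loop, one would call update(model) after each optimizer step and stochastically_prune_gradients(model) before the next forward pass. This sketches the intent at the framework level; the paper integrates the pruning into existing GPU kernels rather than relying on the autograd flag.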
Keywords
convergence-aware neural network training, computation time, training overhead, DNN training, gradient computation, parameter update, parameter convergence, lightweight monitoring technique, semantically related parameters, convergence correlations, DNN models, public datasets