Evolutionary training of deep neural networks on heterogeneous computing environments.

Annual Conference on Genetic and Evolutionary Computation (GECCO), 2022

Abstract
Deep neural networks are typically trained with gradient-based optimizers such as error backpropagation. This study proposes a framework based on Evolutionary Algorithms (EAs) for training deep neural networks without gradients. The network parameters, which may number in the millions, are treated as the optimization variables. We demonstrate the training of an encoder-decoder segmentation network (U-Net) and a Long Short-Term Memory (LSTM) model using a (mu + lambda)-ES, a Genetic Algorithm, and Particle Swarm Optimization. Because the framework relies only on forward propagation, it can train models across machines with different hardware in a cluster computing environment. We compare the predictions of the two models when trained with our framework and with backpropagation, and show that the networks can be trained in less time on CPUs than on specialized compute-intensive GPUs.
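The core idea in the abstract is to flatten all network weights into a single optimization vector and evolve it with a (mu + lambda)-ES using only forward passes. Below is a minimal sketch of that idea in Python/NumPy; the toy MLP, the regression task, and every hyperparameter (MU, LAMBDA, SIGMA, the Gaussian mutation scheme) are illustrative assumptions, not the paper's actual U-Net/LSTM configuration.

```python
# Minimal sketch of gradient-free training with a (mu + lambda)-ES.
# All weights live in one flat vector; fitness needs only forward passes.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed for illustration).
X = rng.normal(size=(64, 4))
y = np.sin(X.sum(axis=1, keepdims=True))

D_IN, D_HID, D_OUT = 4, 16, 1
N_PARAMS = D_IN * D_HID + D_HID + D_HID * D_OUT + D_OUT

def forward(theta, x):
    """Forward pass of a tiny MLP -- no gradients anywhere."""
    i = 0
    W1 = theta[i:i + D_IN * D_HID].reshape(D_IN, D_HID); i += D_IN * D_HID
    b1 = theta[i:i + D_HID]; i += D_HID
    W2 = theta[i:i + D_HID * D_OUT].reshape(D_HID, D_OUT); i += D_HID * D_OUT
    b2 = theta[i:i + D_OUT]
    return np.tanh(x @ W1 + b1) @ W2 + b2

def fitness(theta):
    """Mean squared error; lower is better."""
    return float(np.mean((forward(theta, X) - y) ** 2))

MU, LAMBDA, SIGMA = 10, 40, 0.1  # assumed ES settings

# Initial parent population of flat weight vectors.
parents = [rng.normal(scale=0.5, size=N_PARAMS) for _ in range(MU)]

for gen in range(200):
    # lambda offspring via Gaussian mutation of randomly chosen parents.
    offspring = [parents[rng.integers(MU)] + SIGMA * rng.normal(size=N_PARAMS)
                 for _ in range(LAMBDA)]
    # (mu + lambda) selection: best mu of parents and offspring survive.
    pool = parents + offspring
    pool.sort(key=fitness)
    parents = pool[:MU]
    if gen % 50 == 0:
        print(f"gen {gen:3d}  best MSE {fitness(parents[0]):.4f}")
```

Each fitness evaluation is an independent forward pass, which is what makes this style of training embarrassingly parallel: offspring can be evaluated concurrently on workers with heterogeneous hardware, consistent with the cluster setting the abstract describes.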
Keywords
Neural Networks, Evolutionary Algorithms, Heuristics, Parallelization