PANDA: Population Automatic Neural Distributed Algorithm for Deep Learning

19th IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA/BDCloud/SocialCom/SustainCom 2021)

Abstract
Deep neural network models have achieved remarkable success in artificial intelligence, but that success hinges on hyperparameters. The learning rate schedule is among the most important of these, and searching for a good schedule is often time-consuming and computationally expensive. In this paper, we propose the Population Automatic Neural Distributed Algorithm (PANDA), a population-based joint optimization method that uses distributed data-parallel deep neural network training to refine the learning rate schedule dynamically during training, instead of following the usual suboptimal fixed strategy, with almost no loss of test accuracy. We conducted experiments on the typical AlexNet, VGG16, and ResNet18 models using the Tianhe-3 supercomputer prototype. The results show that dynamically updating the learning rate with PANDA greatly reduces the schedule search time while remaining close in performance to the latest population-based hyperparameter algorithms; in our experiments, PANDA achieved up to a 123.85x speedup. These results demonstrate the effectiveness and robustness of PANDA.
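The population-based exploit/explore loop the abstract describes can be sketched in miniature. This is not the PANDA implementation: it substitutes a toy quadratic objective for real network training, runs the population serially rather than in a distributed data-parallel fashion, and names like `exploit_interval` are illustrative assumptions.

```python
import random

def train_step(w, lr):
    """One SGD step on the toy objective f(w) = w**2 (gradient is 2*w)."""
    return w - lr * 2.0 * w

def population_lr_search(pop_size=4, steps=100, exploit_interval=10, seed=0):
    """Toy population-based learning-rate search (PBT-style sketch)."""
    rng = random.Random(seed)
    # Each member holds its own weights and learning rate;
    # learning rates are sampled log-uniformly from [1e-3, 1].
    pop = [{"w": 1.0, "lr": 10 ** rng.uniform(-3, 0)} for _ in range(pop_size)]
    for step in range(1, steps + 1):
        for m in pop:
            m["w"] = train_step(m["w"], m["lr"])
        if step % exploit_interval == 0:
            # Exploit: members copy the best member's weights and rate,
            # then explore by perturbing the copied learning rate.
            best = min(pop, key=lambda m: m["w"] ** 2)
            for m in pop:
                if m is not best:
                    m["w"], m["lr"] = best["w"], best["lr"]
                    m["lr"] *= rng.choice([0.8, 1.2])
    best = min(pop, key=lambda m: m["w"] ** 2)
    return best["w"] ** 2, best["lr"]

if __name__ == "__main__":
    loss, lr = population_lr_search()
    print(f"final loss {loss:.2e} with learning rate {lr:.4f}")
```

In a real setting, each population member would be one distributed data-parallel training job, and the "loss" driving exploitation would be validation accuracy after each training interval.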
Keywords
Deep Learning, Distributed Training, Hyperparameter Search, Data Parallel, Population Algorithm, Learning Rate Search