EasyScale: Elastic Training with Consistent Accuracy and Improved Utilization on GPUs

SC '23: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (2023)

Abstract
Distributed synchronized GPU training is commonly used for deep learning. The resource constraint of using a fixed number of GPUs makes large-scale training jobs suffer from long queuing times for resource allocation and lowers cluster utilization. Adapting to resource elasticity can alleviate this, but it often introduces inconsistent model accuracy because the model training procedure cannot be decoupled from resource allocation. We propose EasyScale, an elastic training system that achieves consistent model accuracy under resource elasticity on both homogeneous and heterogeneous GPUs. EasyScale strictly preserves data-parallel training behavior, carefully traces the consistency-relevant factors, and exploits deep learning characteristics through its EasyScaleThread abstraction and fast context switching. To utilize heterogeneous clusters, EasyScale dynamically assigns workers using intra-/inter-job schedulers, minimizing load imbalance and maximizing aggregate job throughput. Deployed in an online serving cluster, EasyScale lets training jobs opportunistically utilize idle GPUs, improving overall cluster utilization by 62.1%.
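The abstract's core idea is decoupling logical data-parallel workers (EasyScaleThreads) from physical GPUs via state-capturing context switches, so that the update computed per step does not depend on how many GPUs are currently allocated. The sketch below is a hypothetical toy illustration of that idea, not code from the paper: it time-slices N logical workers inside one process, carrying per-worker RNG state across context switches and accumulating gradients so each step matches an N-worker synchronous run. Names such as LogicalWorker, run_microstep, N_WORKERS, and GLOBAL_BATCH are invented for this example.

```python
# Hypothetical sketch, not the EasyScale implementation: emulate N logical
# data-parallel workers ("EasyScaleThreads") inside one process so the update
# of each step matches an N-worker synchronous run regardless of how many
# physical GPUs are available.
import torch

N_WORKERS = 4       # logical data-parallel degree, kept fixed for consistent accuracy
GLOBAL_BATCH = 32   # global batch size, split evenly across logical workers

class LogicalWorker:
    """Carries the consistency-relevant context of one logical worker."""
    def __init__(self, rank, seed):
        self.rank = rank
        self.rng_state = torch.Generator().manual_seed(seed + rank).get_state()

    def run_microstep(self, model, x, y, loss_fn):
        # "Context switch in": restore this worker's private RNG state.
        g = torch.Generator()
        g.set_state(self.rng_state)
        # This toy forward pass is deterministic; in a real model, `g` would
        # drive dropout, augmentation, and shuffling for this logical worker.
        loss = loss_fn(model(x), y) / N_WORKERS   # scale as an all-reduce mean would
        loss.backward()                           # gradients accumulate in the model
        # "Context switch out": save the state for this worker's next turn.
        self.rng_state = g.get_state()
        return loss.item()

model = torch.nn.Linear(8, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()
workers = [LogicalWorker(r, seed=0) for r in range(N_WORKERS)]

torch.manual_seed(0)
data, labels = torch.randn(GLOBAL_BATCH, 8), torch.randint(0, 2, (GLOBAL_BATCH,))
x_shards, y_shards = data.chunk(N_WORKERS), labels.chunk(N_WORKERS)

# One training step: time-slice every logical worker on whatever hardware exists
# (here a single CPU), accumulating gradients before one optimizer update.
opt.zero_grad()
for w, x, y in zip(workers, x_shards, y_shards):
    w.run_microstep(model, x, y, loss_fn)
opt.step()
```

Because all N logical workers contribute to every step, changing the number of physical GPUs only changes how the micro-steps are scheduled, which is the intuition behind accuracy-consistent elasticity; the real system additionally traces data-loader positions, CUDA RNG, and other consistency-relevant state.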
Keywords
Deep Learning,Accuracy Of Model,Resource Allocation,Training Procedure,Job Training,Queuing Time,Deterministic,Deep Learning Models,Number Of Workers,Random Generation,PyTorch,Memory Usage,Development Of Deep Learning,GPU Memory,Jupyter Notebook,Multiple Works,V100 GPU,Makespan,Training Deep Learning Models,Heterogeneous Resources,Context Switching,Software Stack,Checkpointing,P100 GPU,Extra State,Job Completion Time,Batch Size,Deep Learning Training,Deep Learning Framework,Stage 2