Exploring the Effects of Silent Data Corruption in Distributed Deep Learning Training

2022 IEEE 34th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD), 2022

Abstract
The profound impact of recent developments in artificial intelligence is unquestionable. Applications of deep learning models are everywhere, from advanced natural language processing to highly accurate prediction of extreme weather. These models have grown continuously in complexity, becoming far more powerful than their original versions. In addition, the data available to train them keeps increasing as technological infrastructures sense and collect more readings. Consequently, distributed deep learning training is often necessary to handle intricate models and massive datasets. Running a distributed training strategy on a supercomputer exposes the models to all the considerations of a large-scale machine, reliability being one of them. As supercomputers integrate a colossal number of components, each fabricated at an ever-decreasing feature size, faults are common during program execution. One particular type of fault, silent data corruption (SDC), is troublesome because the system does not crash and gives no immediate, evident sign of an error. We set out to explore the effects of this type of fault by inspecting how distributed deep learning training strategies cope with bit-flips that affect their internal data structures. We used checkpoint alteration, a technique that permits the study of this phenomenon on different distributed training platforms and with different deep learning frameworks. We evaluated two distributed learning libraries (Distributed Data Parallel and Horovod) and found that Horovod is slightly more resilient to SDCs. However, fault propagation is similar in both cases, and the model is more sensitive to SDCs than the optimizer.
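As a rough illustration of the checkpoint alteration idea described in the abstract, the sketch below flips a single bit of one float32 value stored in an HDF5 checkpoint (HDF5 appears among the paper's keywords). This is a minimal sketch under stated assumptions, not the authors' tool: the helper name, file name, dataset path, target index, and bit position are all hypothetical, and the dataset is assumed to hold float32 data.

```python
# Hypothetical checkpoint-alteration sketch: inject one bit-flip into an
# HDF5 checkpoint. Names and indices below are illustrative assumptions.
import h5py
import numpy as np

def flip_bit(path, dataset, flat_index, bit):
    """Flip one bit of a single float32 element in an HDF5 dataset."""
    with h5py.File(path, "r+") as f:
        data = f[dataset][...]                    # load the tensor into memory
        bits = data.reshape(-1).view(np.uint32)   # reinterpret float32 payload as raw bits
        bits[flat_index] ^= np.uint32(1 << bit)   # inject the silent bit-flip
        f[dataset][...] = data                    # write the corrupted tensor back

# Example (hypothetical paths/indices): corrupt a high exponent bit of one weight.
# flip_bit("model_epoch10.h5", "model_weights/dense/kernel", 0, 30)
```

Resuming training from the altered checkpoint and comparing its behavior against an uncorrupted run is one way such a study could observe whether and how the injected corruption propagates.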
Keywords
Deep learning, resilience, checkpoint, neural networks, HDF5, fault injection, high performance computing