Fast Training of Deep Neural Networks for Speech Recognition
ICASSP (2020)
Abstract
Training large, deep neural network acoustic models for speech recognition on large datasets takes a long time on a single GPU, motivating research on parallel training algorithms. We present an approach for training a bidirectional LSTM acoustic model on the 2000-hour Switchboard corpus. The model we train achieves state-of-the-art word error rate, 7.5% on the Hub5-2000 Switchboard test set and 13.1% on the Callhome test set, and scales to an unprecedented 96 learners while employing only 12 global reductions per epoch of training. As our implementation incurs far fewer reductions than prior work, it does not require aggressively optimized communication primitives to reach state-of-the-art performance in a short amount of time. With 48 NVIDIA V100 GPUs training takes 5 hours; with 96 GPUs, training takes around 3 hours.
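The abstract's key idea is that learners train mostly independently and synchronize their models through only a handful of global reductions (averages) per epoch, rather than communicating every step. The following is a minimal NumPy sketch of that communication pattern, not the paper's actual implementation; the function names, the toy local-update rule, and all numeric settings are illustrative assumptions.

```python
import numpy as np

def global_reduce(weights):
    """One global reduction: average model weights across all learners."""
    return np.mean(weights, axis=0)

def train_epoch(weights, num_reductions, steps_per_reduction, rng):
    """Each learner takes local steps; models are averaged only
    num_reductions times per epoch (infrequent communication)."""
    for _ in range(num_reductions):
        for _ in range(steps_per_reduction):
            # Hypothetical local update: learners drift independently.
            weights = weights - 0.1 * weights + 0.01 * rng.standard_normal(weights.shape)
        avg = global_reduce(weights)
        # Every learner adopts the averaged model.
        weights = np.broadcast_to(avg, weights.shape).copy()
    return weights

rng = np.random.default_rng(0)
learners, dim = 96, 8          # 96 learners, as in the abstract; dim is arbitrary
w = rng.standard_normal((learners, dim))
w = train_epoch(w, num_reductions=12, steps_per_reduction=20, rng=rng)
# After the final reduction, all learners hold identical weights.
assert np.allclose(w, w[0])
```

With only 12 reductions per epoch, communication cost is nearly independent of the number of local steps, which is why the authors can scale to 96 GPUs without highly optimized communication primitives.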
Keywords
speech recognition, deep neural network acoustic models, single GPU, parallel training algorithms, bidirectional LSTM acoustic model, Switchboard corpus, word error rate, Callhome test set, 48 NVIDIA V100 GPUs, training time 5 hours, training time 3 hours