Advanced Long-Content Speech Recognition With Factorized Neural Transducer
IEEE/ACM Transactions on Audio, Speech, and Language Processing (2024)
Abstract
In this paper, we propose two novel approaches, which integrate long-content
information into the factorized neural transducer (FNT) based architecture in
both non-streaming (referred to as LongFNT) and streaming (referred to as
SLongFNT) scenarios. We first investigate whether long-content transcriptions
can improve the vanilla conformer transducer (C-T) models. Our experiments
indicate that the vanilla C-T models do not exhibit improved performance when
utilizing long-content transcriptions, possibly due to the predictor network of
C-T models not functioning as a pure language model. Instead, FNT shows its
potential in utilizing long-content information, where we propose the LongFNT
model and explore the impact of long-content information in both text
(LongFNT-Text) and speech (LongFNT-Speech). The proposed LongFNT-Text and
LongFNT-Speech models further complement each other to achieve better
performance, with transcription history proving more valuable to the model. The
effectiveness of our LongFNT approach is evaluated on the LibriSpeech and
GigaSpeech corpora, where it obtains a relative 19% WER
reduction. Furthermore, we extend the LongFNT model to the streaming
scenario, which is named SLongFNT, consisting of the SLongFNT-Text and
SLongFNT-Speech approaches to utilize long-content text and speech information.
Experiments show that the proposed SLongFNT model achieves a relative 26%
WER reduction on LibriSpeech and GigaSpeech while maintaining good
latency compared to the FNT baseline. Overall, our proposed LongFNT and
SLongFNT highlight the significance of considering long-content speech and
transcription knowledge for improving both non-streaming and streaming speech
recognition systems.
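The factorization the abstract relies on can be sketched in a few lines. The key property of FNT is that the vocabulary predictor behaves as a standalone language model whose log-probabilities are combined with the acoustic scores, so its context can be extended with long-content transcription history (the LongFNT-Text idea) without touching the acoustic side. The following is a minimal illustrative sketch, not the paper's actual implementation; the function names and the simple log-linear combination are assumptions for illustration only.

```python
import math

def log_softmax(scores):
    # Numerically stable log-softmax over a plain list of logits.
    m = max(scores)
    z = math.log(sum(math.exp(s - m) for s in scores)) + m
    return [s - z for s in scores]

def fnt_vocab_score(encoder_logits, lm_logits):
    """Hypothetical FNT-style factorization: the final vocabulary score is
    the sum of the acoustic (encoder/joint) log-probability and the
    log-probability of a standalone LM predictor. Because the predictor is
    a pure LM, long-content transcription history can be folded into the
    LM context alone, which a vanilla transducer predictor cannot do."""
    acoustic = log_softmax(encoder_logits)
    language = log_softmax(lm_logits)
    return [a + l for a, l in zip(acoustic, language)]

# Toy 3-token vocabulary: the acoustic model favors token 0, the
# (history-conditioned) LM favors token 1; the combined score decides.
combined = fnt_vocab_score([2.0, 0.5, -1.0], [0.1, 1.5, 0.0])
best = max(range(len(combined)), key=lambda i: combined[i])
```

In the actual LongFNT-Text model the LM context is extended with previous utterances' transcriptions, so `lm_logits` would already reflect long-content history; the combination step itself is unchanged.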
Keywords
long-form speech recognition, streaming and non-streaming, factorized neural transducer, RNN-T