A Comparative Study on Multichannel Speaker-Attributed Automatic Speech Recognition in Multi-party Meetings

2023 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)

Abstract
Speaker-attributed automatic speech recognition (SA-ASR) in multi-party meeting scenarios is one of the most valuable and challenging ASR tasks. Previous work has shown that Single-Channel Frame-level Diarization with Serialized Output Training (SC-FD-SOT), Single-Channel Word-level Diarization with SOT (SC-WD-SOT), and joint training of Single-Channel Target-Speaker separation and ASR (SC-TS-ASR) can partially solve this problem, with SC-TS-ASR achieving the best performance. In this paper, we propose three corresponding MultiChannel (MC) SA-ASR approaches, namely MC-FD-SOT, MC-WD-SOT, and MC-TS-ASR. A different multichannel data fusion strategy is tailored to each task/model: channel-level cross-channel attention for MC-FD-SOT, frame-level cross-channel attention for MC-WD-SOT, and neural beamforming for MC-TS-ASR. Experimental results on the AliMeeting corpus show that the proposed multichannel SA-ASR models consistently outperform their single-channel counterparts in terms of the speaker-dependent character error rate (SD-CER).
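To illustrate one of these fusion strategies, below is a minimal, hypothetical PyTorch sketch of frame-level cross-channel attention (the fusion named for MC-WD-SOT): at each time frame, features from a reference channel attend over the same frame's features across all array channels, collapsing the multichannel input into a single-channel representation for a downstream SA-ASR encoder. The module name, dimensions, and reference-channel choice are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of frame-level cross-channel attention; not the paper's code.
import torch
import torch.nn as nn

class FrameLevelCrossChannelAttention(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, d_model) multichannel encoder features
        b, c, t, d = x.shape
        # Fold frames into the batch so attention runs over the channel axis
        # independently at every frame: the query is the reference channel
        # (channel 0 here, an arbitrary choice); keys/values are all channels.
        x = x.permute(0, 2, 1, 3).reshape(b * t, c, d)  # (b*t, channels, d)
        query = x[:, :1, :]                             # reference channel
        fused, _ = self.attn(query, x, x)               # (b*t, 1, d)
        fused = self.norm(fused + query)                # residual + layer norm
        return fused.reshape(b, t, d)                   # single-channel features

# Usage: fuse 4-channel, 100-frame features into single-channel features.
feats = torch.randn(2, 4, 100, 256)
fusion = FrameLevelCrossChannelAttention()
print(fusion(feats).shape)  # torch.Size([2, 100, 256])
```

Because the attention here spans only the channel axis, its cost grows with the number of microphones rather than the utterance length, which is one plausible reason to fuse per frame rather than over the full multichannel sequence.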