AV-data2vec: Self-supervised Learning of Audio-Visual Speech Representations with Contextualized Target Representations
arXiv (Cornell University), 2023
Abstract
Self-supervision has shown great potential for audio-visual speech
recognition by vastly reducing the amount of labeled data required to build
good systems. However, existing methods are either not entirely end-to-end or
do not train joint representations of both modalities. In this paper, we
introduce AV-data2vec, which addresses these challenges and builds audio-visual
representations by predicting contextualized target representations, an approach
that has been successful in the uni-modal case. The model uses a shared transformer
encoder for both audio and video and can combine both modalities to improve
speech recognition. Results on LRS3 show that AV-data2vec consistently
outperforms existing methods under all settings with the same amount of data
and model size.
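The abstract describes a masked student predicting contextualized targets produced by a teacher over the full audio-visual input, with one shared Transformer encoder for both modalities. The sketch below illustrates that training objective in PyTorch; the module sizes, fusion by summing projected frame features, top-layer target averaging, masking rate, and EMA decay are all assumptions for illustration, not the authors' exact configuration.

```python
# Hypothetical sketch of a data2vec-style audio-visual objective: a masked
# student regresses contextualized targets from an EMA teacher that sees the
# unmasked input. All hyperparameters and the fusion scheme are assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedAVEncoder(nn.Module):
    """Modality-specific front-ends feeding one shared Transformer encoder."""
    def __init__(self, audio_dim=104, video_dim=512, d_model=256, layers=4):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, d_model)
        self.video_proj = nn.Linear(video_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)

    def forward(self, audio, video):
        # Fuse modalities by summing projected frame-level features (assumption).
        x = self.audio_proj(audio) + self.video_proj(video)
        # Keep every layer's hidden states so targets can average the top layers.
        states = []
        for layer in self.encoder.layers:
            x = layer(x)
            states.append(x)
        return states

def data2vec_loss(student, teacher, audio, video, mask_prob=0.3, top_k=2):
    """Student sees masked input; teacher provides contextualized targets."""
    with torch.no_grad():
        targets = torch.stack(teacher(audio, video)[-top_k:]).mean(0)
    # Mask random time steps in both modalities for the student.
    mask = torch.rand(audio.shape[:2]) < mask_prob
    a, v = audio.clone(), video.clone()
    a[mask], v[mask] = 0.0, 0.0
    pred = student(a, v)[-1]
    return F.mse_loss(pred[mask], targets[mask])

# Usage: the teacher is an EMA copy of the student, updated after each step.
student = SharedAVEncoder()
teacher = copy.deepcopy(student)
audio = torch.randn(2, 50, 104)   # (batch, frames, audio features)
video = torch.randn(2, 50, 512)   # (batch, frames, video features)
loss = data2vec_loss(student, teacher, audio, video)
loss.backward()
with torch.no_grad():             # EMA teacher update; decay is an assumption
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(0.999).add_(p_s, alpha=0.001)
```

Because the targets are hidden states of a contextualizing encoder rather than fixed discrete units, the student learns to predict representations that summarize the whole utterance, which is the key idea the abstract attributes to the uni-modal data2vec approach.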
Key words
contextualized target representations, av-data2vec, self-supervised, audio-visual