Machine translation by projecting text into the same phonetic-orthographic space using a common encoding

CoRR (2023)

Abstract
The use of subword embeddings has proved to be a major innovation in neural machine translation (NMT). It helps NMT learn better context vectors for low-resource languages (LRLs), improving target-word prediction by better modelling the morphologies of the two languages and the morphosyntactic transfer between them. NMT models that achieve state-of-the-art results on LRLs, such as the Transformer, BERT, BART, and mBART, can all use subword embeddings. Even so, their performance on Indian-language-to-Indian-language translation is still not as good as for resource-rich languages. One reason is the relative morphological richness of Indian languages; another is that most of them fall into the extremely-low-resource or zero-shot categories. Since most major Indian languages use Indic (Brahmi-origin) scripts, text written in them is highly phonetic in nature and phonetically similar in terms of abstract letters and their arrangements. We use these characteristics of Indian languages and their scripts to propose an approach based on a common multilingual Latin-based encoding (WX notation) that takes advantage of language similarity while addressing the morphological-complexity issue in NMT. Such multilingual Latin-based encodings in NMT, together with byte pair encoding (BPE), allow us to better exploit the phonetic, orthographic, and lexical similarities of the languages, improving translation quality by projecting different but similar languages onto the same orthographic-phonetic character space. We verify the proposed approach with experiments on similar language pairs (Gujarati ↔ Hindi, Marathi ↔ Hindi, Nepali ↔ Hindi, Maithili ↔ Hindi, Punjabi ↔ Hindi, and Urdu ↔ Hindi) under low-resource conditions. The proposed approach shows an improvement in a majority of cases, in one case by as much as ∼10 BLEU points over baseline techniques for similar language pairs. We also obtain up to ∼1 BLEU point improvement on distant and zero-shot language pairs.
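The core idea of projecting related Indic scripts into one Latin character space can be sketched as follows. This is an illustrative toy (not the authors' code): it exploits the parallel layout of Indic blocks in Unicode (each Gujarati letter sits at a fixed offset of 0x180 above its Devanagari counterpart) and applies a small, partial WX-style character map; the `WX` table covers only a subset of letters and ignores the inherent schwa for simplicity.

```python
# Sketch: fold Gujarati into Devanagari codepoints, then emit a common
# Latin (WX-style) encoding so cognates share the same subword units.
# Illustrative subset only; a full WX converter covers all letters,
# matras, and the inherent schwa, which this toy omits.

DEV_BASE, GUJ_BASE = 0x0900, 0x0A80
OFFSET = GUJ_BASE - DEV_BASE  # 0x180: fixed inter-block offset in Unicode

# Partial WX-style mapping for Devanagari (subset, for illustration).
WX = {
    "अ": "a", "आ": "A", "इ": "i", "ई": "I", "उ": "u", "ऊ": "U",
    "क": "k", "ख": "K", "ग": "g", "घ": "G",
    "त": "w", "थ": "W", "द": "x", "ध": "X", "न": "n",
    "म": "m", "र": "r", "ल": "l", "स": "s", "ह": "h",
    "ा": "A", "ि": "i", "ी": "I", "ु": "u", "ू": "U", "्": "",
}

def to_wx(text: str) -> str:
    """Normalize Gujarati letters to Devanagari, then map to WX Latin."""
    out = []
    for ch in text:
        cp = ord(ch)
        if GUJ_BASE <= cp <= GUJ_BASE + 0x7F:  # fold Gujarati -> Devanagari
            ch = chr(cp - OFFSET)
        out.append(WX.get(ch, ch))             # pass unknown chars through
    return "".join(out)

# The Hindi and Gujarati spellings of "hand" now coincide:
print(to_wx("हाथ"))   # Hindi    -> hAW
print(to_wx("હાથ"))   # Gujarati -> hAW
```

After this projection, BPE learned over the shared Latin text can assign identical subword units to cognates across the two languages, which is the source of the transfer the paper exploits.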
Keywords
Neural machine translation, common phonetic-orthographic space, similar languages, byte pair encoding, transformer model