Cheat Codes to Quantify Missing Source Information in Neural Machine Translation

North American Chapter of the Association for Computational Linguistics (NAACL), 2022

Abstract
This paper describes a method to quantify the amount of information H(t | s) added by the target sentence t that is not present in the source s in a neural machine translation system. We do this by providing the model with the target sentence in a highly compressed form (a "cheat code") and exploring the effect of the cheat code's size. We find that the model captures extra information from even a single-float representation of the target, and nearly reproduces the target with two 32-bit floats per target token.
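The core idea can be sketched in a few lines: compress per-token target representations through a narrow linear bottleneck to k 32-bit floats per token, which bounds the extra information the "cheat code" can carry at 32k bits per token. This is a minimal toy sketch, not the paper's implementation; the projection here is random rather than learned, and all names and dimensions (d_model, floats_per_token, make_cheat_code) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 64          # decoder hidden size (assumed for illustration)
floats_per_token = 2  # cheat-code width; the paper explores this size

def make_cheat_code(target_states: np.ndarray) -> np.ndarray:
    """Compress per-token target states (T, d_model) down to
    (T, floats_per_token) 32-bit floats via a linear bottleneck
    (random here; learned in a real system)."""
    W_down = rng.normal(size=(d_model, floats_per_token)).astype(np.float32)
    return (target_states @ W_down).astype(np.float32)

def code_capacity_bits(num_tokens: int, k: int = floats_per_token) -> int:
    """Upper bound on extra target information the code can carry:
    32 bits per float, k floats per token."""
    return 32 * k * num_tokens

T = 10  # target length in tokens
states = rng.normal(size=(T, d_model)).astype(np.float32)
code = make_cheat_code(states)
print(code.shape)             # (10, 2)
print(code_capacity_bits(T))  # 640
```

In the paper's setting this compressed code would be supplied to the translation model alongside the source, and the gap it closes in model quality is what quantifies H(t | s).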