Efficient Grammatical Error Correction Via Multi-Task Training and Optimized Training Schedule
CoRR (2023)
Abstract
Progress in neural grammatical error correction (GEC) is hindered by the lack
of annotated training data. Sufficient amounts of high-quality manually
annotated data are not available, so recent research has relied on generating
synthetic data, pretraining on it, and then fine-tuning on real datasets;
performance gains have been achieved either by ensembling or by using huge
pretrained models such as XXL-T5 as the backbone. In this work, we explore an
orthogonal direction: how to use available data more efficiently. First, we
propose auxiliary tasks that exploit the alignment between the original and
corrected sentences, such as predicting a sequence of corrections. We formulate
each task as a sequence-to-sequence problem and perform multi-task training.
Second, we discover that the order of datasets used for training and even
individual instances within a dataset may have important effects on the final
performance, so we set out to find the best training schedule. Together, these
two ideas lead to significant improvements, producing results that improve the
state of the art with much smaller models; in particular, we outperform the
best models based on T5-XXL (11B parameters) with a BART-based model (400M
parameters).
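
Below is a minimal sketch, not the authors' code, of how one such auxiliary target, a sequence of corrections, could be derived from the alignment between an original and a corrected sentence. The function name `correction_sequence` and the edit tags (`KEEP`, `DELETE`, `REPLACE:...`, `INSERT:...`) are illustrative assumptions; the paper's actual task formulation may differ.

```python
# Hypothetical sketch: derive a correction-sequence target from an
# aligned (original, corrected) sentence pair, assuming token-level
# alignment via difflib. Tag names are illustrative, not the paper's.
import difflib

def correction_sequence(source: str, target: str) -> list[str]:
    """Align source/target tokens and emit one edit tag per aligned span."""
    src, tgt = source.split(), target.split()
    edits = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=src, b=tgt).get_opcodes():
        if tag == "equal":
            edits.extend("KEEP" for _ in range(i2 - i1))
        elif tag == "replace":
            edits.append(f"REPLACE:{' '.join(tgt[j1:j2])}")
        elif tag == "delete":
            edits.extend("DELETE" for _ in range(i2 - i1))
        elif tag == "insert":
            edits.append(f"INSERT:{' '.join(tgt[j1:j2])}")
    return edits

# Example auxiliary seq2seq target for a simple correction:
print(correction_sequence("He go to school", "He goes to school"))
# ['KEEP', 'REPLACE:goes', 'KEEP', 'KEEP']
```

A tag sequence like this can serve as the output of an additional sequence-to-sequence task trained alongside ordinary correction, which is one plausible reading of the multi-task setup the abstract describes.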