ParKQ: An automated Paraphrase ranKing Quality measure that balances semantic similarity with lexical diversity

Thanh Duong, Tuan-Dung Le, Ho'omana Nathan Horton, Stephanie Link, Thanh Thieu

Natural Language Processing Journal (2024)

Abstract
BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) have set new state-of-the-art performance on paraphrase quality measurement. However, they focus mainly on semantic similarity and overlook the lexical diversity between the two sentences. LexDivPara (Thieu et al., 2022) introduced a method that combines semantic similarity and lexical diversity, but it depends on a human-provided semantic score to reach its full performance. In this work, we present ParKQ (Paraphrase ranKing Quality), a fully automatic method for measuring the holistic quality of sentential paraphrases. We create a semantic similarity ensemble model by combining the most popular adaptations of the pre-trained BERT (Devlin et al., 2019) network: BLEURT (Sellam et al., 2020), BERTScore (Zhang et al., 2020), and Sentence-BERT (Reimers et al., 2019). We then build paraphrase quality learning-to-rank models with XGBoost (Chen et al., 2016) and TF-Ranking (Pasumarthi et al., 2019) by combining the ensemble semantic score with lexical features including edit distance, BLEU, and ROUGE. To analyze and evaluate this intricate paraphrase quality measure, we create a gold-standard dataset using expert linguistic coding. The gold-standard annotation comprises four linguistic scores (semantic, lexical, grammatical, overall) and spans three heterogeneous datasets commonly used to benchmark paraphrasing tasks: the STS Benchmark (https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark), the ParaBank Evaluation data (https://github.com/decompositional-semantics-initiative/ParaBank-Eval-Data), and the MSR corpus (https://www.microsoft.com/en-us/download/details.aspx?id=52398). Our ParKQ models demonstrate robust correlation with all linguistic scores, making ParKQ the first practical tool for measuring the holistic quality (semantic similarity + lexical diversity) of sentential paraphrases.
In evaluation, we compare our models against contemporary methods that can generate holistic quality scores for paraphrases, including LexDivPara, ParaScore, and the emergent ChatGPT.
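The lexical features named in the abstract (edit distance, BLEU, ROUGE) can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the tokenization, the choice of word-level Levenshtein distance, single-n BLEU-style precision, and ROUGE-1 recall are all simplifying assumptions for illustration.

```python
from collections import Counter

def edit_distance(a, b):
    """Word-level Levenshtein distance via a one-row dynamic program."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            # deletion, insertion, or substitution (free on a match)
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + (a[i - 1] != b[j - 1]))
            prev = cur
    return dp[n]

def ngram_precision(cand, ref, n=2):
    """BLEU-style modified n-gram precision for a single order n."""
    c = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    r = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    overlap = sum(min(c[g], r[g]) for g in c)
    return overlap / max(sum(c.values()), 1)

def rouge1_recall(cand, ref):
    """ROUGE-1 recall: fraction of reference unigrams covered by the candidate."""
    c, r = Counter(cand), Counter(ref)
    overlap = sum(min(c[w], r[w]) for w in r)
    return overlap / max(sum(r.values()), 1)

# Hypothetical sentence pair: a source and one candidate paraphrase.
src = "the cat sat on the mat".split()
para = "a cat rested on the rug".split()

# Feature vector of the kind a learning-to-rank model (e.g. XGBoost)
# could consume alongside an ensemble semantic similarity score.
features = {
    "edit_distance": edit_distance(src, para),
    "bleu2": ngram_precision(para, src, n=2),
    "rouge1": rouge1_recall(para, src),
}
```

In a setup like the one described, low BLEU/ROUGE overlap combined with a high semantic score would indicate a lexically diverse yet meaning-preserving paraphrase.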
Keywords
Paraphrase quality evaluation