Deep Biaffine Attention for Neural Dependency Parsing

International Conference on Learning Representations (ICLR), 2017

Abstract
This paper builds off recent work from Kiperwasser & Goldberg (2016) using neural attention in a simple graph-based dependency parser. We use a larger but more thoroughly regularized parser than other recent BiLSTM-based approaches, with biaffine classifiers to predict arcs and labels. Our parser gets state of the art or near state of the art performance on standard treebanks for six different languages, achieving 95.7% UAS and 94.1% LAS on the most popular English PTB dataset. This makes it the highest-performing graph-based parser on this benchmark, outperforming Kiperwasser & Goldberg (2016) by 1.8% and 2.2%, and comparable to the highest-performing transition-based parser (Kuncoro et al., 2016), which achieves 95.8% UAS and 94.6% LAS. We also show which hyperparameter choices had a significant effect on parsing accuracy, allowing us to achieve large gains over other graph-based approaches.
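
Since the abstract's key modeling choice is the biaffine classifier, the following NumPy sketch shows the shape of the arc-scoring computation it describes; all names, dimensions, and the random initialization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of a biaffine arc scorer (illustrative only; names,
# shapes, and initialization are assumptions, not the paper's code).
# Given per-token "dependent" and "head" vectors produced by separate
# MLPs over BiLSTM states, the score of token j heading token i is
#   s[i, j] = H_dep[i] @ U @ H_head[j] + H_head[j] @ u
# i.e. a bilinear term plus a linear "prior" on j being a head at all.

rng = np.random.default_rng(0)
n, d = 5, 8                              # sentence length, MLP size (assumed)
H_dep = rng.standard_normal((n, d))      # arc-dep representations
H_head = rng.standard_normal((n, d))     # arc-head representations
U = rng.standard_normal((d, d))          # bilinear weight matrix
u = rng.standard_normal(d)               # linear head-prior weights

scores = H_dep @ U @ H_head.T + H_head @ u  # (n, n) arc score matrix
pred_heads = scores.argmax(axis=1)          # greedy head choice per token
print(pred_heads)
```

A real parser would also mask each token's own position and, when a well-formed tree is required, decode with a maximum-spanning-tree algorithm instead of the per-token argmax shown here.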