Attention Understands Semantic Relations.

International Conference on Language Resources and Evaluation (LREC), 2022

Abstract
Today, natural language processing relies heavily on pre-trained large language models. Even though such models are criticised for poor interpretability, they still yield state-of-the-art solutions for a wide range of very different tasks. While many probing studies have been conducted to measure the models' awareness of grammatical knowledge, semantic probing is less popular. In this work, we introduce a probing pipeline to study how semantic relations are represented in transformer language models. We show that in this task, attention scores express information about relations nearly as well as the layers' output activations, despite their lesser ability to represent surface cues. This supports the hypothesis that attention mechanisms focus not only on syntactic relational information but on semantic information as well.
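The probing idea in the abstract can be sketched in miniature: compute attention scores from a toy self-attention head, take the scores between a candidate word pair as features, and train a small diagnostic classifier on them. Everything below is a synthetic illustration, not the paper's pipeline: the tied query/key projection is a stand-in for a relation-sensitive head, and the "related pair" data is randomly generated rather than drawn from any corpus or ontology.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_matrix(X, W):
    """Single toy attention head with a tied query/key projection W.

    Tying queries and keys makes each token attend to tokens with similar
    embeddings -- an illustrative stand-in for a relation-sensitive head,
    not the transformer models probed in the paper.
    """
    Q = K = X @ W
    logits = Q @ K.T / np.sqrt(K.shape[1])
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # each row sums to 1

# Synthetic "sentences": positions 1 and 3 hold a candidate word pair.
# Positives make the pair semantically related (near-identical embeddings);
# negatives leave it unrelated. Hypothetical data, for illustration only.
d, seq_len, n = 16, 6, 400
W = rng.normal(size=(d, d)) / np.sqrt(d)

feats, labels = [], []
for i in range(n):
    X = rng.normal(size=(seq_len, d))
    y = i % 2
    if y == 1:
        X[3] = X[1] + 0.1 * rng.normal(size=d)  # related pair
    A = attention_matrix(X, W)
    feats.append([A[1, 3], A[3, 1]])  # attention between the pair, both directions
    labels.append(y)
feats, labels = np.array(feats), np.array(labels)

# Minimal logistic-regression probe trained by gradient descent: if the
# attention scores encode the relation, the probe separates the two classes.
w, b = np.zeros(2), 0.0
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    grad = p - labels
    w -= 0.5 * feats.T @ grad / n
    b -= 0.5 * grad.mean()

acc = ((feats @ w + b > 0) == labels).mean()
print(f"probe accuracy on attention-score features: {acc:.2f}")
```

High probe accuracy here only means the attention scores are linearly decodable for the relation, mirroring the paper's methodology of comparing attention scores against layer activations as probe inputs.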
Keywords
ontology extraction, knowledge probing, semantic probing, explainable AI (XAI), language models interpretation, bertology