Memory Injections: Correcting Multi-Hop Reasoning Failures During Inference in Transformer-Based Language Models

Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP (2023)

Cited by 7 | Views 75
Keywords
Attention Mechanism,Language Modeling,Sequence-to-Sequence Learning,Language Understanding,Semantic Reasoning