Memory Injections: Correcting Multi-Hop Reasoning Failures During Inference in Transformer-Based Language Models
Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP (2023)
Keywords
Attention Mechanism, Language Modeling, Sequence-to-Sequence Learning, Language Understanding, Semantic Reasoning