Localizing Paragraph Memorization in Language Models
CoRR (2024)
Abstract
Can we localize the weights and mechanisms used by a language model to
memorize and recite entire paragraphs of its training data? In this paper, we
show that while memorization is spread across multiple layers and model
components, gradients of memorized paragraphs have a distinguishable spatial
pattern, being larger in lower model layers than gradients of non-memorized
examples. Moreover, the memorized examples can be unlearned by fine-tuning only
the high-gradient weights. We localize a low-layer attention head that appears
to be especially involved in paragraph memorization. This head predominantly
focuses its attention on distinctive, rare tokens that are least frequent in a
corpus-level unigram distribution. Next, we study how localized memorization is
across the tokens in the prefix by perturbing individual tokens and measuring
the resulting change in the decoding. A few distinctive tokens early in a prefix can often
corrupt the entire continuation. Overall, memorized continuations are not only
harder to unlearn but also harder to corrupt than non-memorized ones.
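
The gradient pattern described in the abstract can be probed with a few lines of PyTorch. The following is a minimal sketch, not the authors' code: the model name, the ".h.<i>" block-naming convention, and the example texts are assumptions, chosen only to illustrate comparing per-layer gradient norms for a memorized versus a non-memorized paragraph.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "EleutherAI/gpt-neo-125M"  # assumed; any causal LM with "h.<i>" blocks works
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def layer_grad_norms(text):
    """Return {layer_index: L2 norm of gradients} for the LM loss on `text`."""
    model.zero_grad()
    ids = tok(text, return_tensors="pt").input_ids
    loss = model(input_ids=ids, labels=ids).loss  # next-token cross-entropy
    loss.backward()
    sq = {}
    for name, p in model.named_parameters():
        if p.grad is not None and ".h." in name:  # e.g. "transformer.h.3.attn..."
            layer = int(name.split(".h.")[1].split(".")[0])
            sq[layer] = sq.get(layer, 0.0) + p.grad.pow(2).sum().item()
    return {layer: v ** 0.5 for layer, v in sq.items()}

# Placeholders: supply a paragraph the model recites verbatim vs. one it does not.
mem = layer_grad_norms("<memorized paragraph>")
non = layer_grad_norms("<non-memorized paragraph>")
for layer in sorted(mem):
    print(f"layer {layer:2d}  memorized {mem[layer]:.3f}  non-memorized {non[layer]:.3f}")
```

If the abstract's finding holds, the memorized paragraph should show noticeably larger norms in the lower layers; restricting fine-tuning to the highest-gradient weights identified this way is the unlearning step the paper describes.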