Proving membership in LLM pretraining data via data watermarks
CoRR (2024)
Abstract
Detecting whether copyright holders' works were used in LLM pretraining is
poised to be an important problem. This work proposes using data watermarks to
enable principled detection with only black-box model access, provided that the
rightholder contributed multiple training documents and watermarked them before
public release. By applying a randomly sampled data watermark, detection can be
framed as hypothesis testing, which provides guarantees on the false detection
rate. We study two watermarks: one that inserts random sequences, and another
that randomly substitutes characters with Unicode lookalikes. We first show how
three aspects of watermark design – watermark length, number of duplications,
and interference – affect the power of the hypothesis test. Next, we study how
a watermark's detection strength changes under model and dataset scaling: while
increasing the dataset size decreases the strength of the watermark, watermarks
remain strong if the model size also increases. Finally, we view SHA hashes as
natural watermarks and show that we can robustly detect hashes from
BLOOM-176B's training data, as long as they occurred at least 90 times.
Together, our results point towards a promising future for data watermarks in
real-world use.
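The detection idea described above can be sketched as a permutation-style hypothesis test: because the rightholder sampled the watermark at random, the model's score on the published watermark can be compared against scores on freshly sampled null watermarks, and the resulting p-value bounds the false detection rate. The sketch below is illustrative only: `model_log_prob` is a hypothetical stand-in for querying the LLM (here simulated as a model that has partially memorized the true watermark), and all names and parameters are assumptions, not the paper's implementation.

```python
import random

# Hypothetical stand-in for black-box scoring: the log-probability the model
# assigns to a candidate watermark. We simulate memorization by rewarding
# positions that match the memorized sequence (an assumption for illustration).
def model_log_prob(sequence, memorized):
    base = -2.0 * len(sequence)                      # generic per-token surprisal
    bonus = 3.0 * sum(a == b for a, b in zip(sequence, memorized))
    return base + bonus

def detection_p_value(published_wm, memorized, vocab, n_null=1000, seed=0):
    """Compare the model's score on the published watermark against scores
    on randomly sampled null watermarks; the fraction of null watermarks
    scoring at least as high is a valid p-value for membership."""
    rng = random.Random(seed)
    observed = model_log_prob(published_wm, memorized)
    null_scores = [
        model_log_prob([rng.choice(vocab) for _ in published_wm], memorized)
        for _ in range(n_null)
    ]
    exceed = sum(s >= observed for s in null_scores)
    return (exceed + 1) / (n_null + 1)               # add-one smoothing

vocab = list("abcdefghijklmnopqrstuvwxyz")
rng = random.Random(42)
wm = [rng.choice(vocab) for _ in range(20)]          # randomly sampled watermark
p = detection_p_value(wm, memorized=wm, vocab=vocab)
print(f"p-value when the model saw the watermark: {p:.4f}")
```

Under this toy scoring, the memorized watermark scores far above any random null watermark, so the p-value bottoms out near 1/(n_null + 1); with a real model, the same test would use the model's actual likelihoods, and the random-sequence or Unicode-lookalike design only changes how the watermark is embedded, not the test itself.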