The Price of Selection in Differential Privacy.

COLT 2017

Abstract
In the differentially private top-$k$ selection problem, we are given a dataset $X \in \{\pm 1\}^{n \times d}$, in which each row belongs to an individual and each column corresponds to some binary attribute, and our goal is to find a set of $k \ll d$ columns whose means are approximately as large as possible. Differential privacy requires that our choice of these $k$ columns does not depend too much on any one individual's data. This problem can be solved using the well-known exponential mechanism and composition properties of differential privacy. In the high-accuracy regime, where we require the error of the selection procedure to be smaller than the so-called sampling error $\alpha \approx \sqrt{\ln(d)/n}$, this procedure succeeds given a dataset of size $n \gtrsim k \ln(d)$. We prove a matching lower bound, showing that a dataset of size $n \gtrsim k \ln(d)$ is necessary for private top-$k$ selection in this high-accuracy regime. Our lower bound is the first to show that selecting the $k$ largest columns requires more data than simply estimating the value of those $k$ columns, which can be done using a dataset of size just $n \gtrsim k$.
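The upper bound mentioned in the abstract applies the exponential mechanism $k$ times under basic composition ("peeling"): each round spends $\varepsilon/k$ of the privacy budget to sample one not-yet-chosen column, with probability exponentially weighted by its mean. The sketch below illustrates this; the function name `private_top_k`, the even $\varepsilon/k$ budget split, and the sensitivity bound $2/n$ for a column mean over $\{\pm 1\}$ entries are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def private_top_k(X, k, eps, rng=None):
    """Select k columns of X in {-1,+1}^{n x d} with approximately largest
    means, via k rounds of the exponential mechanism ("peeling").

    Illustrative sketch: each round uses budget eps/k (basic composition),
    and the sensitivity of a column mean is 2/n, since changing one row
    of X moves a column sum by at most 2.
    """
    rng = rng or np.random.default_rng()
    n, d = X.shape
    means = X.mean(axis=0)
    sensitivity = 2.0 / n
    eps_round = eps / k
    remaining = list(range(d))
    chosen = []
    for _ in range(k):
        scores = means[remaining]
        # Exponential mechanism: Pr[j] proportional to
        # exp(eps_round * score_j / (2 * sensitivity)).
        logits = eps_round * scores / (2.0 * sensitivity)
        logits -= logits.max()          # subtract max for numerical stability
        probs = np.exp(logits)
        probs /= probs.sum()
        pick = rng.choice(len(remaining), p=probs)
        chosen.append(remaining.pop(pick))
    return chosen
```

With a large budget the mechanism concentrates on the truly largest columns; the abstract's point is the converse, that in the high-accuracy regime no private mechanism can do better than the $n \gtrsim k \ln(d)$ sample cost this approach pays.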