Sharper Bounds for ℓ_p Sensitivity Sampling

ICML 2023 (2023)

Abstract
In large-scale machine learning, random sampling is a popular way to approximate a dataset by a small, representative subset of examples. In particular, sensitivity sampling is an intensely studied technique which provides provable guarantees on the quality of approximation, while reducing the number of examples to the product of the VC dimension d and the total sensitivity 𝔖 in remarkably general settings. However, guarantees going beyond this general 𝔖d bound are known in perhaps only one setting, for ℓ_2 subspace embeddings, despite intense study of sensitivity sampling in prior work. In this work, we show the first bounds for sensitivity sampling for ℓ_p subspace embeddings for p > 2 that improve over the general 𝔖d bound, achieving a bound of roughly 𝔖^{2-2/p} for 2 < p < ∞.
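The Python sketch below illustrates the generic sensitivity-sampling recipe that the abstract refers to, applied to an ℓ_p subspace embedding: each row is kept with probability proportional to (an upper bound on) its sensitivity and rescaled so that ‖SAx‖_p^p is an unbiased estimate of ‖Ax‖_p^p. This is a minimal illustration under stated assumptions, not the paper's construction; the leverage-score-based sensitivity bound and all function names are illustrative choices.

```python
import numpy as np

def l2_leverage_scores(A):
    """l_2 leverage scores of the rows of A (diagonal of the projection onto col(A))."""
    Q, _ = np.linalg.qr(A)          # thin QR: columns of Q form an orthonormal basis of col(A)
    return np.sum(Q * Q, axis=1)    # tau_i = ||row i of Q||_2^2

def lp_sensitivity_sample(A, p, m, rng=None):
    """Illustrative sensitivity sampling for an l_p subspace embedding (p >= 2).

    The true l_p sensitivities s_i = sup_x |a_i @ x|^p / ||A x||_p^p are expensive
    to compute, so this sketch uses the crude upper bound
    s_i <= n^(p/2 - 1) * tau_i^(p/2) obtained from l_2 leverage scores.
    Row i is drawn with probability q_i proportional to that bound and rescaled
    by (m * q_i)^(-1/p), so E[||S A x||_p^p] = ||A x||_p^p for every x.
    """
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    tau = l2_leverage_scores(A)
    s_bound = n ** (p / 2 - 1) * tau ** (p / 2)      # sensitivity upper bounds
    q = s_bound / s_bound.sum()                      # sampling distribution
    idx = rng.choice(n, size=m, replace=True, p=q)   # m i.i.d. row draws
    scale = (m * q[idx]) ** (-1.0 / p)               # unbiased l_p^p reweighting
    return scale[:, None] * A[idx]

# Usage: compare ||Ax||_p on the full and sampled matrices for a random direction x.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((10_000, 20))
    x = rng.standard_normal(20)
    SA = lp_sensitivity_sample(A, p=4, m=2_000, rng=1)
    print(np.linalg.norm(A @ x, ord=4), np.linalg.norm(SA @ x, ord=4))
```

The sharper bounds in the paper concern how small m can be taken while still preserving ‖Ax‖_p for all x simultaneously; the sketch above only shows the sampling-and-reweighting mechanism itself.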