
Privacy-Preserving SGD on Shuffle Model

Journal of Mathematics (2023)

Abstract
In this paper, we study differentially private stochastic gradient descent (SGD) algorithms for stochastic convex optimization (SCO). Most of the existing literature imposes additional assumptions on the losses, such as Lipschitz continuity, smoothness, strong convexity, or uniformly bounded model parameters, or focuses on the Euclidean (i.e., $\ell_2^d$) setting. These restrictive requirements exclude many popular losses, including the absolute loss and the hinge loss. By loosening the restrictions, we propose two differentially private SGD algorithms, without and with the shuffle model (DP-SGD-NOS and DP-SGD-S for short), for $(\alpha, L)$-Hölder smooth losses, which add calibrated Laplace noise under the non-shuffling and shuffling schemes in the $\ell_p^d$ setting for $p \in [1, 2]$. We provide privacy guarantees using advanced composition and privacy amplification techniques. We also analyze the convergence of DP-SGD-NOS and DP-SGD-S and obtain the optimal excess population risks $O\big(\tfrac{1}{\sqrt{n}} + \tfrac{\sqrt{d\log(1/\delta)}}{n\epsilon}\big)$ and $O\big(\tfrac{1}{\sqrt{n}} + \tfrac{\sqrt{d\log(1/\delta)}\,\log(n/\delta)}{n^{(4+\alpha)/(2(1+\alpha))}\epsilon}\big)$, up to logarithmic factors, with gradient complexity $O\big(n^{(2-\alpha)/(1+\alpha)} + n\big)$. It turns out that the optimal utility bound with the shuffle model is superior to the bound without the shuffle model, which is consistent with previous work. In addition, DP-SGD-S achieves the optimal utility bound with $O(n)$ (i.e., linear) gradient computations for $\alpha = 1/2$. There is a significant trade-off between $(\alpha, L)$-Hölder smooth losses and gradient complexity for differentially private SGD, both without and with the shuffle model.