Compact: Approximating Complex Activation Functions for Secure Computation
arXiv (2023)
Abstract
Secure multi-party computation (MPC) techniques can be used to provide data
privacy when users query deep neural network (DNN) models hosted on a public
cloud. State-of-the-art MPC techniques can be directly leveraged for DNN models
that use simple activation functions such as ReLU. However, these techniques
are ineffective and/or inefficient for the complex and highly non-linear
activation functions used in cutting-edge DNN models.
We present Compact, which produces piece-wise polynomial approximations of
complex activation functions (AFs) to enable their efficient use with
state-of-the-art MPC techniques.
Compact neither requires nor imposes any restriction on model training and
results in near-identical model accuracy. To achieve this, we design Compact
with input density awareness and use an application-specific simulated
annealing type optimization to generate computationally more efficient
approximations of complex AFs. We extensively evaluate Compact on four
different machine-learning tasks with DNN architectures that use popular
complex AFs SiLU, GELU, and Mish. Our experimental results show that Compact
incurs negligible accuracy loss while being 2x-5x computationally more
efficient than state-of-the-art approaches for DNN models with a large number
of hidden layers. Our work eases the adoption of MPC techniques to provide
user data privacy even when the queried DNN models consist of many hidden
layers and are trained with complex AFs.
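The abstract's core idea, replacing a complex AF with per-interval low-degree polynomials, can be illustrated with a minimal sketch. This is not Compact's actual algorithm (which places knots with input-density awareness and refines them via simulated-annealing-style optimization); it is only a hand-picked-knot, least-squares illustration of piece-wise polynomial approximation of SiLU. The knot positions, polynomial degree, and fitting range below are illustrative assumptions.

```python
import numpy as np

def silu(x):
    # SiLU (a.k.a. swish): x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def fit_piecewise(f, knots, degree=2, samples=200):
    """Fit one degree-`degree` polynomial per interval between knots
    (least squares; Compact instead optimizes the knots themselves)."""
    pieces = []
    for a, b in zip(knots[:-1], knots[1:]):
        xs = np.linspace(a, b, samples)
        coeffs = np.polyfit(xs, f(xs), degree)
        pieces.append((a, b, coeffs))
    return pieces

def eval_piecewise(pieces, x):
    # Select the interval containing x and evaluate its polynomial.
    for a, b, coeffs in pieces:
        if a <= x <= b:
            return np.polyval(coeffs, x)
    # Outside the fitted range: fall back to the nearest edge piece.
    a, b, coeffs = pieces[0] if x < pieces[0][0] else pieces[-1]
    return np.polyval(coeffs, x)

# Illustrative knots, denser near 0 where SiLU has most curvature.
knots = [-8.0, -4.0, -1.0, 1.0, 4.0, 8.0]
pieces = fit_piecewise(silu, knots)

xs = np.linspace(-8.0, 8.0, 1000)
max_err = max(abs(eval_piecewise(pieces, x) - silu(x)) for x in xs)
```

Each piece is a plain polynomial, so under MPC it reduces to secure additions and multiplications plus one secure interval selection, which is the efficiency argument the abstract makes against evaluating the exponential in SiLU/GELU/Mish directly.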