Data-Oblivious ML Accelerators using Hardware Security Extensions
CoRR (2024)
Abstract
Outsourced computation can put client data confidentiality at risk. Existing
solutions are either inefficient or insufficiently secure: cryptographic
techniques like fully-homomorphic encryption incur significant overheads, even
with hardware assistance, while the complexity of hardware-assisted trusted
execution environments has been exploited to leak secret data.
Recent proposals such as BliMe and OISA show how dynamic information flow
tracking (DIFT) enforced in hardware can protect client data efficiently. They
are designed to protect CPU-only workloads. However, many outsourced computing
applications, like machine learning, make extensive use of accelerators.
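The core DIFT idea can be illustrated with a toy software model: every value carries a taint bit, any operation on a tainted operand produces a tainted result, and attempts to release tainted data to an untrusted sink are blocked. This is a minimal sketch for intuition only; the `Tainted` class and `emit_to_untrusted_output` policy check are hypothetical and do not reflect the actual BliMe, OISA, or Dolma hardware designs, which enforce equivalent rules in hardware at near-zero runtime cost.

```python
# Toy software model of hardware-enforced DIFT (illustrative only;
# not the BliMe/OISA/Dolma hardware design).

class Tainted:
    """A value paired with a confidentiality taint bit."""
    def __init__(self, value, taint=False):
        self.value = value
        self.taint = taint

    def _wrap(self, other):
        return other if isinstance(other, Tainted) else Tainted(other)

    def __add__(self, other):
        other = self._wrap(other)
        # Taint propagates: the result is tainted if either operand is.
        return Tainted(self.value + other.value, self.taint or other.taint)

    def __mul__(self, other):
        other = self._wrap(other)
        return Tainted(self.value * other.value, self.taint or other.taint)

def emit_to_untrusted_output(x):
    """Hypothetical policy check: refuse to release tainted (client) data."""
    if isinstance(x, Tainted) and x.taint:
        raise PermissionError("policy violation: tainted data cannot leave")
    return x.value if isinstance(x, Tainted) else x

secret = Tainted(42, taint=True)   # client data, marked confidential
public = Tainted(7)                # non-sensitive server-side value

result = secret * public + 1       # taint propagates through computation
assert result.taint                # any derived value is also confidential

try:
    emit_to_untrusted_output(result)
except PermissionError as e:
    print("blocked:", e)
```

In the hardware schemes, the analogous taint propagation and output checks are performed by the processor (or, in Dolma, the accelerator) itself, so they hold even if all software on the server is malicious.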
We address this gap with Dolma, which applies DIFT to the Gemmini matrix
multiplication accelerator, efficiently guaranteeing client data
confidentiality, even in the presence of malicious or vulnerable software and
side-channel attacks on the server. We show that accelerators can allow DIFT logic
optimizations that significantly reduce area overhead compared with
general-purpose processor architectures. Dolma is integrated with the BliMe
framework to achieve end-to-end security guarantees. We evaluate Dolma on an
FPGA using a ResNet-50 DNN model and show that it incurs low overheads for
large configurations (4.4%, 16.7%, and 16.5% for performance, resource
usage, and power, respectively, with a 32×32 configuration).