TY  - CONF
T1 - Characterizing and Optimizing End-to-End Systems for Private Inference
AU - Garimella, Karthik
AU - Ghodsi, Zahra
AU - Jha, Nandan Kumar
AU - Garg, Siddharth
AU - Reagen, Brandon
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/3/25
Y1 - 2023/3/25
AB  - In two-party machine learning prediction services, the client's goal is to query a remote server's trained machine learning model to perform neural network inference in some application domain. However, sensitive information can be obtained during this process by either the client or the server, leading to potential collection, unauthorized secondary use, and inappropriate access to personal information. These security concerns have given rise to Private Inference (PI), in which both the client's personal data and the server's trained model are kept confidential. State-of-the-art PI protocols consist of a pre-processing or offline phase and an online phase that combine several cryptographic primitives: Homomorphic Encryption (HE), Secret Sharing (SS), Garbled Circuits (GC), and Oblivious Transfer (OT). Despite the need and recent performance improvements, PI remains largely arcane today and is too slow for practical use. This paper addresses PI's shortcomings with a detailed characterization of a standard high-performance protocol to build foundational knowledge and intuition in the systems community. Our characterization pinpoints all sources of inefficiency: compute, communication, and storage. In contrast to prior work, we consider inference request arrival rates rather than studying individual inferences in isolation, and we find that the pre-processing phase cannot be ignored and is often incurred online, as there is insufficient downtime to hide pre-compute latency. Finally, we leverage insights from our characterization and propose three optimizations to address the storage (Client-Garbler), computation (layer-parallel HE), and communication (wireless slot allocation) overheads. Compared to the state-of-the-art PI protocol, these optimizations provide a total PI speedup of 1.8× and the ability to sustain inference requests at up to a 2.24× greater rate. Looking ahead, we conclude with an analysis of future research innovations and their expected improvements to PI latency.
KW - cryptography
KW - machine learning
KW - private inference protocols
KW - systems for machine learning
UR - http://www.scopus.com/inward/record.url?scp=85159264394&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85159264394&partnerID=8YFLogxK
DO - 10.1145/3582016.3582065
M3 - Conference contribution
AN - SCOPUS:85159264394
T3 - International Conference on Architectural Support for Programming Languages and Operating Systems - ASPLOS
SP - 89
EP - 104
BT - ASPLOS 2023 - Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems
A2 - Aamodt, Tor M.
A2 - Jerger, Natalie Enright
A2 - Swift, Michael
PB - Association for Computing Machinery
T2 - 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS 2023
Y2 - 25 March 2023 through 29 March 2023
ER -