Characterizing and Optimizing End-to-End Systems for Private Inference

Karthik Garimella, Zahra Ghodsi, Nandan Kumar Jha, Siddharth Garg, Brandon Reagen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In two-party machine learning prediction services, the client's goal is to query a remote server's trained machine learning model to perform neural network inference in some application domain. However, sensitive information can be obtained during this process by either the client or the server, leading to potential collection, unauthorized secondary use, and inappropriate access to personal information. These security concerns have given rise to Private Inference (PI), in which both the client's personal data and the server's trained model are kept confidential. State-of-the-art PI protocols consist of a pre-processing (offline) phase and an online phase that together combine several cryptographic primitives: Homomorphic Encryption (HE), Secret Sharing (SS), Garbled Circuits (GC), and Oblivious Transfer (OT). Despite the need and recent performance improvements, PI remains largely arcane today and is too slow for practical use. This paper addresses PI's shortcomings with a detailed characterization of a standard high-performance protocol to build foundational knowledge and intuition in the systems community. Our characterization pinpoints all sources of inefficiency: compute, communication, and storage. In contrast to prior work, we consider inference request arrival rates rather than studying individual inferences in isolation, and we find that the pre-processing phase cannot be ignored and is often incurred online, as there is insufficient downtime to hide pre-compute latency. Finally, we leverage insights from our characterization and propose three optimizations to address the storage (Client-Garbler), computation (layer-parallel HE), and communication (wireless slot allocation) overheads. Compared to the state-of-the-art PI protocol, these optimizations provide a total PI speedup of 1.8× and sustain inference requests at up to a 2.24× greater rate.
Looking ahead, we conclude with an analysis of future research directions and their potential impact on PI latency.
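To make the primitives named in the abstract concrete, the sketch below illustrates additive secret sharing (SS), one of the building blocks the protocol combines. This is a minimal, hypothetical illustration, not the paper's actual protocol; the field modulus and function names are assumptions chosen for clarity.

```python
import secrets

# Illustrative only: additive secret sharing over a prime field.
# The modulus choice is an assumption, not from the paper.
P = 2**61 - 1  # Mersenne prime used as the field modulus

def share(x, p=P):
    """Split secret x into two additive shares: x = s0 + s1 (mod p)."""
    s0 = secrets.randbelow(p)   # uniformly random share reveals nothing alone
    s1 = (x - s0) % p
    return s0, s1

def reconstruct(s0, s1, p=P):
    """Recombine both shares to recover the secret."""
    return (s0 + s1) % p

# Each party can add its shares locally; reconstructing the summed shares
# yields the sum of the secrets. This is why linear layers are cheap
# under secret sharing, while non-linear layers need GC or OT.
a0, a1 = share(5)
b0, b1 = share(7)
assert reconstruct((a0 + b0) % P, (a1 + b1) % P) == 12
```

Each share on its own is a uniformly random field element, so neither party learns the other's input; only combining both shares reveals the value.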

Original language: English (US)
Title of host publication: ASPLOS 2023 - Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems
Editors: Tor M. Aamodt, Natalie Enright Jerger, Michael Swift
Publisher: Association for Computing Machinery
Pages: 89-104
Number of pages: 16
ISBN (Electronic): 9781450399180
DOIs
State: Published - Mar 25 2023
Event: 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS 2023 - Vancouver, Canada
Duration: Mar 25 2023 - Mar 29 2023

Publication series

Name: International Conference on Architectural Support for Programming Languages and Operating Systems - ASPLOS
Volume: 3

Conference

Conference: 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS 2023
Country/Territory: Canada
City: Vancouver
Period: 3/25/23 - 3/29/23

Keywords

  • cryptography
  • machine learning
  • private inference protocols
  • systems for machine learning

ASJC Scopus subject areas

  • Software
  • Information Systems
  • Hardware and Architecture
