Model-switching: Dealing with fluctuating workloads in machine-learning-as-a-service systems

Jeff Zhang, Sameh Elnikety, Shuayb Zarar, Atul Gupta, Siddharth Garg

Research output: Contribution to conference › Paper › peer-review

Abstract

Machine learning (ML) based prediction models, and especially deep neural networks (DNNs), are increasingly being served in the cloud in order to provide fast and accurate inferences. However, existing ML serving systems have trouble dealing with fluctuating workloads and either drop requests or significantly expand hardware resources in response to load spikes. In this paper, we introduce Model-Switching, a new approach to dealing with fluctuating workloads when serving DNN models. Motivated by the observation that end-users of ML primarily care about the accuracy of responses that are returned within the deadline (which we refer to as effective accuracy), we propose to switch from complex and highly accurate DNN models to simpler but less accurate models in the presence of load spikes. We show that the flexibility introduced by enabling online model switching provides higher effective accuracy in the presence of fluctuating workloads compared to serving using any single model. We implement Model-Switching within Clipper, a state-of-the-art DNN model serving system, and demonstrate its advantages over baseline approaches.
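To make the idea concrete, below is a minimal, hypothetical Python sketch of the model-switching policy and the effective-accuracy metric described in the abstract. The model variants, latencies, deadline, and selection rule are illustrative assumptions for exposition only; they are not the paper's Clipper-based implementation.

```python
# Hypothetical sketch of model switching under fluctuating load.
# All model names, latencies, and thresholds below are assumed values,
# not figures from the paper.
from dataclasses import dataclass


@dataclass
class ModelVariant:
    name: str
    accuracy: float   # offline validation accuracy (assumed)
    latency_s: float  # mean per-request service time (assumed)


# Candidate DNN variants, ordered from most to least accurate.
VARIANTS = [
    ModelVariant("resnet152", accuracy=0.78, latency_s=0.120),
    ModelVariant("resnet50",  accuracy=0.76, latency_s=0.045),
    ModelVariant("resnet18",  accuracy=0.70, latency_s=0.015),
]

DEADLINE_S = 0.200  # per-request latency SLO (assumed)


def pick_variant(arrival_rate_rps: float, num_replicas: int) -> ModelVariant:
    """Pick the most accurate variant whose replicas can sustain the
    current arrival rate while meeting the deadline; under a load spike
    this naturally falls back to a simpler, faster model."""
    for v in VARIANTS:  # most accurate first
        sustainable_rps = num_replicas / v.latency_s
        if sustainable_rps >= arrival_rate_rps and v.latency_s <= DEADLINE_S:
            return v
    return VARIANTS[-1]  # cheapest model as a last resort


def effective_accuracy(correct_within_deadline: int, total_requests: int) -> float:
    """Accuracy measured over all issued requests: responses that are
    late or dropped contribute zero, per the abstract's definition."""
    return correct_within_deadline / max(total_requests, 1)


if __name__ == "__main__":
    # Example: a spike from 100 to 500 requests/s on 8 replicas
    # pushes the policy from the large model to a smaller one.
    for load in (100.0, 500.0):
        print(load, pick_variant(load, num_replicas=8).name)
```

The sketch captures the core trade-off the paper exploits: under light load the most accurate model meets deadlines, while under a spike a cheaper model keeps responses within the deadline and therefore yields higher effective accuracy than either model served alone.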

Original language: English (US)
State: Published - 2020
Event: 12th USENIX Workshop on Hot Topics in Cloud Computing, HotCloud 2020, co-located with USENIX ATC 2020 - Virtual, Online
Duration: Jul 13 2020 - Jul 14 2020

Conference

Conference: 12th USENIX Workshop on Hot Topics in Cloud Computing, HotCloud 2020, co-located with USENIX ATC 2020
City: Virtual, Online
Period: 7/13/20 - 7/14/20

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Software
