TY - JOUR
T1 - Designing future warehouse-scale computers for Sirius, an end-to-end voice and vision personal assistant
AU - Hauswald, Johann
AU - Laurenzano, Michael A.
AU - Zhang, Yunqi
AU - Yang, Hailong
AU - Kang, Yiping
AU - Li, Cheng
AU - Rovinski, Austin
AU - Khurana, Arjun
AU - Dreslinski, Ronald G.
AU - Mudge, Trevor
AU - Petrucci, Vinicius
AU - Tang, Lingjia
AU - Mars, Jason
N1 - Funding Information:
This article extends the version published at ASPLOS 2015. This work was partially sponsored by Google, ARM, the Defense Advanced Research Projects Agency (DARPA) under agreement HR0011-13-2-000, and the National Science Foundation (NSF) under grants CCF-SHF-1302682 and CNS-CSR-1321047.
Publisher Copyright:
© 2016 ACM.
PY - 2016/4/6
Y1 - 2016/4/6
AB - As user demand scales for intelligent personal assistants (IPAs) such as Apple's Siri, Google's Google Now, and Microsoft's Cortana, we are approaching the computational limits of current datacenter (DC) architectures. It is an open question how future server architectures should evolve to enable this emerging class of applications, and the lack of an open-source IPA workload is an obstacle in addressing this question. In this article, we present the design of Sirius, an open end-to-end IPA Web-service application that accepts queries in the form of voice and images, and responds with natural language. We then use this workload to investigate the implications of four points in the design space of future accelerator-based server architectures spanning traditional CPUs, GPUs, manycore throughput co-processors, and FPGAs. To investigate future server designs for Sirius, we decompose Sirius into a suite of eight benchmarks (Sirius Suite) comprising the computationally intensive bottlenecks of Sirius. We port Sirius Suite to a spectrum of accelerator platforms and use the performance and power trade-offs across these platforms to perform a total cost of ownership (TCO) analysis of various server design points. In our study, we find that accelerators are critical for the future scalability of IPA services. Our results show that GPU- and FPGA-accelerated servers improve the query latency on average by 8.5× and 15×, respectively. For a given throughput, GPU- and FPGA-accelerated servers can reduce the TCO of DCs by 2.3× and 1.3×, respectively.
KW - Datacenters
KW - Emerging workloads
KW - Intelligent personal assistants
KW - Warehouse-scale computers
UR - http://www.scopus.com/inward/record.url?scp=84966348710&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84966348710&partnerID=8YFLogxK
DO - 10.1145/2870631
M3 - Article
AN - SCOPUS:84966348710
SN - 0734-2071
VL - 34
JO - ACM Transactions on Computer Systems
JF - ACM Transactions on Computer Systems
IS - 1
M1 - 2
ER -