TY - CONF
T1 - Weightless: Lossy Weight Encoding for Deep Neural Network Compression
T2 - 6th International Conference on Learning Representations, ICLR 2018
AU - Reagen, Brandon
AU - Gupta, Udit
AU - Adolf, Robert
AU - Mitzenmacher, Michael
AU - Rush, Alexander M.
AU - Wei, Gu-Yeon
AU - Brooks, David
N1 - Funding Information:
This work was partially supported by C-FAR, one of six centers of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA. This research was, in part, funded by the U.S. Government under the DARPA CRAFT and PERFECT programs (Contract #: HR0011-13-C-0022). Intel Corporation also provided support. Brandon Reagen acknowledges support from the Siebel Scholarship.
Publisher Copyright:
© 6th International Conference on Learning Representations, ICLR 2018 - Workshop Track Proceedings. All rights reserved.
PY - 2018
AB - The large memory requirements of deep neural networks limit their deployment and adoption on many devices. Model compression methods effectively reduce the memory requirements of these models, typically by applying transformations such as weight pruning or quantization. In this paper, we present a novel scheme for lossy weight encoding that complements conventional compression techniques. The encoding is based on the Bloomier filter, a probabilistic data structure that saves space at the cost of introducing random errors. By leveraging the ability of neural networks to tolerate these imperfections and re-training around the errors, the proposed technique, Weightless, can compress DNN weights by up to 496× with no loss of model accuracy. This is up to a 1.51× improvement over the state of the art.
UR - http://www.scopus.com/inward/record.url?scp=85083951194&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85083951194&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85083951194
Y2 - 30 April 2018 through 3 May 2018
ER -
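
Annotation: the abstract's core mechanism can be illustrated with a toy Bloomier filter. The sketch below is not the authors' implementation; the hash derivation, the cell width t, the table size m, and the dictionary-of-quantized-weights encoding are illustrative assumptions.

# Toy sketch (assumed details, not the paper's code) of the Bloomier-filter
# lookup that Weightless uses as a lossy weight encoder: each stored key
# hashes to k table cells plus a t-bit mask, and the XOR of those cells
# with the mask recovers the stored t-bit value. Keys that were never
# stored (e.g. pruned-to-zero weights) return arbitrary t-bit values --
# the "random errors" the abstract says re-training absorbs.
import random

def _cells(key, k, m, t, seed):
    # Derive k distinct cell indices in [0, m) and a t-bit mask for `key`.
    rng = random.Random(f"{seed}:{key}")
    return rng.sample(range(m), k), rng.getrandbits(t)

def build(kv, m, k=3, t=8, max_seeds=50):
    # Greedy "peeling" construction (Chazelle et al.): repeatedly peel keys
    # that own a cell no other remaining key touches, then fill the table in
    # reverse peel order so every stored key's XOR telescopes to its value.
    for seed in range(max_seeds):
        remaining, order = dict(kv), []
        while remaining:
            counts = {}
            for key in remaining:
                for i in _cells(key, k, m, t, seed)[0]:
                    counts[i] = counts.get(i, 0) + 1
            peeled = []
            for key in list(remaining):
                idxs, _ = _cells(key, k, m, t, seed)
                slot = next((i for i in idxs if counts[i] == 1), None)
                if slot is not None:        # `key` owns cell `slot` alone
                    peeled.append((key, slot))
                    del remaining[key]
            if not peeled:
                break                       # stuck; retry with a fresh seed
            order += peeled
        if not remaining:
            table = [0] * m
            for key, slot in reversed(order):
                idxs, mask = _cells(key, k, m, t, seed)
                acc = kv[key] ^ mask        # values must fit in t bits
                for i in idxs:
                    if i != slot:
                        acc ^= table[i]
                table[slot] = acc
            return table, seed
    raise RuntimeError("construction failed; grow m or max_seeds")

def query(table, seed, key, k=3, t=8):
    # XOR the key's cells and mask; exact for stored keys, arbitrary otherwise.
    idxs, mask = _cells(key, k, len(table), t, seed)
    for i in idxs:
        mask ^= table[i]
    return mask

# Hypothetical usage: store only the surviving (unpruned) weights' quantized
# values, keyed by position; pruned weights are simply absent.
weights = {(0, 3): 5, (1, 7): 2, (4, 2): 9}
table, seed = build(weights, m=16)
assert all(query(table, seed, key) == v for key, v in weights.items())

Because pruned weights are omitted rather than stored, the table is sized to the surviving weights only; combined with pruning and quantization, this is the source of the compression the abstract reports.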