TY - GEN
T1 - Weightless: Lossy weight encoding for deep neural network compression
T2 - 35th International Conference on Machine Learning, ICML 2018
AU - Reagen, Brandon
AU - Gupta, Udit
AU - Adolf, Robert
AU - Mitzenmacher, Michael
AU - Rush, Alexander M.
AU - Wei, Gu-Yeon
AU - Brooks, David
N1 - Funding Information:
This work was supported in part by the Center for Applications Driving Architectures (ADA), one of six centers of JUMP, a Semiconductor Research Corporation program co-sponsored by DARPA. The work was also partially supported by the U.S. Government, under the DARPA CRAFT and DARPA PERFECT programs. Support was provided in part by NSF-1704834. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. Michael Mitzenmacher was supported in part by NSF grants CNS-1228598, CCF-1320231, CCF-1535795, and CCF-1563710. Brandon Reagen was supported by a Siebel Scholarship. Udit Gupta was supported by the Smith family fellowship.
PY - 2018
Y1 - 2018
N2 - The large memory requirements of deep neural networks limit their deployment and adoption on many devices. Model compression methods effectively reduce the memory requirements of these models, usually by applying transformations such as weight pruning or quantization. In this paper, we present a novel scheme for lossy weight encoding co-designed with weight simplification techniques. The encoding is based on the Bloomier filter, a probabilistic data structure that can save space at the cost of introducing random errors. By leveraging the ability of neural networks to tolerate these imperfections and re-training around the errors, the proposed technique, named Weightless, can compress weights by up to 496× without loss of model accuracy. This results in up to a 1.51× improvement over the state-of-the-art.
AB - The large memory requirements of deep neural networks limit their deployment and adoption on many devices. Model compression methods effectively reduce the memory requirements of these models, usually by applying transformations such as weight pruning or quantization. In this paper, we present a novel scheme for lossy weight encoding co-designed with weight simplification techniques. The encoding is based on the Bloomier filter, a probabilistic data structure that can save space at the cost of introducing random errors. By leveraging the ability of neural networks to tolerate these imperfections and re-training around the errors, the proposed technique, named Weightless, can compress weights by up to 496× without loss of model accuracy. This results in up to a 1.51× improvement over the state-of-the-art.
UR - http://www.scopus.com/inward/record.url?scp=85057315259&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85057315259&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85057315259
T3 - 35th International Conference on Machine Learning, ICML 2018
SP - 6886
EP - 6899
BT - 35th International Conference on Machine Learning, ICML 2018
A2 - Krause, Andreas
A2 - Dy, Jennifer
PB - International Machine Learning Society (IMLS)
Y2 - 10 July 2018 through 15 July 2018
ER -