TY - GEN
T1 - RTN: Reparameterized Ternary Network
T2 - 34th AAAI Conference on Artificial Intelligence, AAAI 2020
AU - Li, Yuhang
AU - Dong, Xin
AU - Zhang, Sai Qian
AU - Bai, Haoli
AU - Chen, Yuanpeng
AU - Wang, Wei
N1 - Publisher Copyright:
Copyright © 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2020
Y1 - 2020
N2 - To deploy deep neural networks on resource-limited devices, quantization has been widely explored. In this work, we study extremely low-bit networks, which offer tremendous speed-up and memory savings with quantized activations and weights. We first bring up three overlooked issues in extremely low-bit networks: the squashed range of quantized values, gradient vanishing during backpropagation, and the unexploited hardware acceleration of ternary networks. By reparameterizing the quantized activation and weight vectors with a full-precision scale and offset applied to a fixed ternary vector, we decouple the range and magnitude from the direction to alleviate the above problems. The learnable scale and offset can automatically adjust the range and sparsity of the quantized values without gradient vanishing. A novel encoding and computation pattern is designed to support efficient computing for our reparameterized ternary network (RTN). Experiments on ResNet-18 for ImageNet demonstrate that the proposed RTN strikes a much better trade-off between bitwidth and accuracy and achieves up to 26.76% relative accuracy improvement compared with state-of-the-art methods. Moreover, we validate the proposed computation pattern on Field Programmable Gate Arrays (FPGAs), where it brings 46.46× and 89.17× savings in power and area, respectively, compared with full-precision convolution.
AB - To deploy deep neural networks on resource-limited devices, quantization has been widely explored. In this work, we study extremely low-bit networks, which offer tremendous speed-up and memory savings with quantized activations and weights. We first bring up three overlooked issues in extremely low-bit networks: the squashed range of quantized values, gradient vanishing during backpropagation, and the unexploited hardware acceleration of ternary networks. By reparameterizing the quantized activation and weight vectors with a full-precision scale and offset applied to a fixed ternary vector, we decouple the range and magnitude from the direction to alleviate the above problems. The learnable scale and offset can automatically adjust the range and sparsity of the quantized values without gradient vanishing. A novel encoding and computation pattern is designed to support efficient computing for our reparameterized ternary network (RTN). Experiments on ResNet-18 for ImageNet demonstrate that the proposed RTN strikes a much better trade-off between bitwidth and accuracy and achieves up to 26.76% relative accuracy improvement compared with state-of-the-art methods. Moreover, we validate the proposed computation pattern on Field Programmable Gate Arrays (FPGAs), where it brings 46.46× and 89.17× savings in power and area, respectively, compared with full-precision convolution.
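N1 - Illustration note: the abstract describes reparameterizing quantized weights as a full-precision scale and offset applied to a fixed ternary vector. The following is a minimal sketch (assuming PyTorch) of that idea only; the function name ternary_reparam and the 0.7 x mean-magnitude threshold are illustrative assumptions, not the paper's exact algorithm.

import torch

def ternary_reparam(w, scale, offset):
    # Sketch of scale/offset reparameterization over a ternary vector:
    # w_q ~= scale * t + offset, with t in {-1, 0, +1}.
    # Hard ternarization with a 0.7 * mean-magnitude threshold is a common
    # heuristic (assumption here, not necessarily the paper's choice); during
    # training it would typically be paired with a straight-through estimator.
    delta = 0.7 * w.abs().mean()
    t = torch.zeros_like(w)
    t[w > delta] = 1.0
    t[w < -delta] = -1.0
    # scale and offset stay full precision and receive gradients directly,
    # so the quantized range and sparsity can adapt without gradients
    # having to flow through the discrete ternary values.
    return scale * t + offset

# Usage: wrap a weight tensor with learnable scale/offset parameters.
w = torch.randn(64, 64)
scale = torch.nn.Parameter(w.abs().mean().detach().clone())
offset = torch.nn.Parameter(torch.zeros(()))
w_q = ternary_reparam(w, scale, offset)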
UR - http://www.scopus.com/inward/record.url?scp=85106629118&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85106629118&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85106629118
T3 - AAAI 2020 - 34th AAAI Conference on Artificial Intelligence
SP - 4780
EP - 4787
BT - AAAI 2020 - 34th AAAI Conference on Artificial Intelligence
PB - AAAI Press
Y2 - 7 February 2020 through 12 February 2020
ER -