
Author

  • Fabien Cardinaux, Stefan Uhlich, Kazuki Yoshiyama, Javier Alonso García, Lukas Mauch, Stephen Tiedemann, Thomas Kemp, Akira Nakamura

Company

  • Sony Europe B.V.

Venue

  • IEEE Journal of Selected Topics in Signal Processing (JSTSP)

Date

  • 2020


Iteratively Training Look-Up Tables for Network Quantization


Abstract

Operating deep neural networks (DNNs) on devices with limited resources requires reducing both their memory footprint and their computational cost. Popular reduction methods are network quantization and pruning, which either reduce the word length of the network parameters or remove unneeded weights from the network. In this article, we discuss a general framework for network reduction which we call Look-Up Table Quantization (LUT-Q). For each layer, we learn a value dictionary and an assignment matrix to represent the network weights. We propose a solver that combines gradient descent with a one-step k-means update to learn both the value dictionaries and assignment matrices iteratively. This method is very flexible: by constraining the value dictionary, many different reduction problems such as non-uniform network quantization, training of multiplierless networks, network pruning, or simultaneous quantization and pruning can be implemented without changing the solver. This flexibility allows us to use the same method to train networks for different hardware capabilities.
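To illustrate the idea described in the abstract, the sketch below shows one possible iteration of look-up-table quantization for a single weight matrix: a gradient step on the full-precision weights, followed by a re-assignment of each weight to its nearest dictionary value and a one-step k-means update of the dictionary. This is a minimal NumPy illustration, not the paper's implementation; the function names (lut_assign, lut_update, lutq_step), the dictionary size K, the learning rate, and the stand-in gradient are all assumptions for the sake of the example.

```python
# Minimal sketch of one LUT-Q-style iteration for a single weight matrix W.
# Assumptions: the gradient w.r.t. the quantized weights is supplied by the
# surrounding training loop; names and initialization are illustrative only.
import numpy as np

def lut_assign(W, d):
    """Assign each weight to the index of its nearest dictionary value."""
    dist = np.abs(W.reshape(-1, 1) - d.reshape(1, -1))  # |w_i - d_k| for all pairs
    return dist.argmin(axis=1)                          # one index per weight

def lut_update(W, assign, d):
    """One k-means step: each dictionary value becomes the mean of its assigned weights."""
    d_new = d.copy()
    for k in range(d.size):
        members = W.reshape(-1)[assign == k]
        if members.size > 0:
            d_new[k] = members.mean()
    return d_new

def lutq_step(W, d, grad_Wq, lr=1e-3):
    """One iteration: gradient step on the float weights, then refit assignment and dictionary."""
    W = W - lr * grad_Wq            # gradient descent on the full-precision weights
    assign = lut_assign(W, d)       # recompute the assignment
    d = lut_update(W, assign, d)    # one-step k-means update of the value dictionary
    Wq = d[assign].reshape(W.shape) # quantized weights used in the next forward pass
    return W, d, Wq

# Toy usage: a 4x4 weight matrix quantized to K = 4 values.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
d = np.quantile(W, [0.125, 0.375, 0.625, 0.875])  # simple dictionary initialization
grad = rng.normal(size=(4, 4))                    # stand-in for a real backpropagated gradient
W, d, Wq = lutq_step(W, d, grad)
print(np.unique(Wq))                              # at most K distinct values remain
```

Constraining the dictionary (for example, to powers of two or to include zero) is what turns the same loop into multiplierless training or pruning, as the abstract notes.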
