Quantization error-based regularization for hardware-aware neural network training

  • Hirose Kazutoshi
    Graduate School of Information Science and Technology, Hokkaido University
  • Uematsu Ryota
    Graduate School of Information Science and Technology, Hokkaido University
  • Ando Kota
    Graduate School of Information Science and Technology, Hokkaido University
  • Ueyoshi Kodai
    Graduate School of Information Science and Technology, Hokkaido University
  • Ikebe Masayuki
    Graduate School of Information Science and Technology, Hokkaido University
  • Asai Tetsuya
    Graduate School of Information Science and Technology, Hokkaido University
  • Motomura Masato
    Graduate School of Information Science and Technology, Hokkaido University
  • Takamaeda-Yamazaki Shinya
    Graduate School of Information Science and Technology, Hokkaido University

Abstract

We propose “QER”, a novel regularization strategy for hardware-aware neural network training. Although quantized neural networks reduce computational cost and resource consumption, they also degrade accuracy due to quantization errors in the numerical representation, defined as the differences between the original numbers and their quantized counterparts. QER addresses this problem by appending a regularization term based on the quantization errors of the weights to the loss function. The regularization term forces the quantization errors of the weights to be reduced alongside the original loss. We evaluate our method on MNIST with a simple neural network model. The evaluation results show that the proposed approach achieves higher accuracy than the standard training approach with quantized forward propagation.
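
The core idea described in the abstract is to augment the task loss with a penalty on the distance between each weight and its quantized value, roughly L_total = L_task + λ · Σ_w (w − Q(w))². Below is a minimal sketch of this idea in PyTorch; the uniform symmetric quantizer, the bit width, and the coefficient `lam` are illustrative assumptions, not the authors' reported configuration.

```python
# Sketch of quantization-error regularization (QER) as described in the
# abstract. Quantizer, bit width, and lambda are assumed for illustration.
import torch
import torch.nn as nn


def quantize(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """Uniform symmetric quantization of a weight tensor (assumed scheme)."""
    scale = w.abs().max() / (2 ** (bits - 1) - 1)
    if scale == 0:
        return w
    return torch.round(w / scale) * scale


def qer_loss(model: nn.Module, task_loss: torch.Tensor,
             lam: float = 1e-3, bits: int = 8) -> torch.Tensor:
    """Total loss = task loss + lam * sum of squared quantization errors."""
    reg = torch.zeros((), device=task_loss.device)
    for p in model.parameters():
        # Detach the quantized target so the gradient simply pulls each
        # weight toward its nearest quantized value.
        q = quantize(p.detach(), bits)
        reg = reg + torch.sum((p - q) ** 2)
    return task_loss + lam * reg
```

In a training step one would replace the usual loss with, e.g., `loss = qer_loss(model, criterion(model(x), y))` before calling `loss.backward()`, so that minimizing the task loss and shrinking the weights' quantization errors happen jointly.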
