• Ao Ren
    Syracuse University, Syracuse, NY, USA
  • Zhe Li
    Syracuse University, Syracuse, NY, USA
  • Caiwen Ding
    Syracuse University, Syracuse, NY, USA
  • Qinru Qiu
    Syracuse University, Syracuse, NY, USA
  • Yanzhi Wang
    Syracuse University, Syracuse, NY, USA
  • Ji Li
    University of Southern California, Los Angeles, CA, USA
  • Xuehai Qian
    University of Southern California, Los Angeles, CA, USA
  • Bo Yuan
    City University of New York, New York, NY, USA

Bibliographic Details

Alternative title
  • Highly-Scalable Deep Convolutional Neural Network using Stochastic Computing

Abstract

With the recent advance of wearable devices and the Internet of Things (IoT), it has become attractive to implement Deep Convolutional Neural Networks (DCNNs) in embedded and portable systems. Currently, executing software-based DCNNs requires high-performance servers, restricting their widespread deployment on embedded and mobile IoT devices. To overcome this obstacle, considerable research effort has gone into developing highly parallel and specialized DCNN accelerators using GPGPUs, FPGAs, or ASICs.

Stochastic Computing (SC), which uses a bit-stream to represent a number within [-1, 1] by counting the number of ones in the stream, has high potential for implementing DCNNs with high scalability and an ultra-low hardware footprint. Since multiplications and additions can be computed with AND gates and multiplexers in SC, significant reductions in power (energy) and hardware footprint can be achieved compared to conventional binary arithmetic implementations. These tremendous savings in power (energy) and hardware resources open an immense design space for enhancing the scalability and robustness of hardware DCNNs.

This paper presents SC-DCNN, the first comprehensive design and optimization framework for SC-based DCNNs, developed with a bottom-up approach. We first present designs of the function blocks that perform the basic operations in a DCNN: inner product, pooling, and activation. We then propose four designs of feature extraction blocks, which are in charge of extracting features from input feature maps, by connecting the basic function blocks with joint optimization. Moreover, efficient weight storage methods are proposed to reduce area and power (energy) consumption. Putting it all together, with the feature extraction blocks carefully selected, SC-DCNN is holistically optimized to minimize area and power (energy) consumption while maintaining high network accuracy.

Experimental results demonstrate that LeNet-5 implemented in SC-DCNN occupies only 17 mm² of area and consumes 1.53 W of power, achieving a throughput of 781,250 images/s, an area efficiency of 45,946 images/s/mm², and an energy efficiency of 510,734 images/J.
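The AND-gate multiplication mentioned in the abstract can be illustrated with a short simulation. This sketch is not from the paper: for clarity it uses the simpler unipolar SC format on [0, 1], where a value is the probability that a bit is 1, multiplication is a bitwise AND of independent streams, and scaled addition is a multiplexer driven by a fair random select line (the paper itself works with the bipolar format on [-1, 1]). All function names here are illustrative.

```python
import random

def to_stream(x, n, rng):
    """Unipolar SC encoding: x in [0, 1] becomes an n-bit stream
    where each bit is 1 with probability x."""
    return [1 if rng.random() < x else 0 for _ in range(n)]

def from_stream(bits):
    """Decode a stream: the value is the fraction of ones."""
    return sum(bits) / len(bits)

def sc_multiply(a_bits, b_bits):
    """Multiplication is a bitwise AND: for independent streams,
    P(a AND b) = P(a) * P(b)."""
    return [a & b for a, b in zip(a_bits, b_bits)]

def sc_add(a_bits, b_bits, rng):
    """Scaled addition via a multiplexer: a random select line picks
    each input half the time, so the output encodes (a + b) / 2."""
    return [a if rng.random() < 0.5 else b for a, b in zip(a_bits, b_bits)]

rng = random.Random(42)
n = 100_000
a = to_stream(0.6, n, rng)
b = to_stream(0.5, n, rng)
print(from_stream(sc_multiply(a, b)))   # close to 0.6 * 0.5 = 0.30
print(from_stream(sc_add(a, b, rng)))   # close to (0.6 + 0.5) / 2 = 0.55
```

The accuracy of the decoded result improves with stream length, which is the root of SC's trade-off between latency and precision that a framework like SC-DCNN must optimize.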

Published in

  • ACM SIGPLAN Notices

    ACM SIGPLAN Notices 52 (4), 405-418, 2017-04-04

    Association for Computing Machinery (ACM)

Cited by (1)
