
Energy-Efficient SNN Implementation Method Using Zero-spike Prediction


Abstract

Spiking neural networks (SNNs), which employ event-based spike computation, can be implemented in hardware that supports on-chip learning and inference in a power- and area-efficient manner. Although many SNN hardware designs have been proposed for energy efficiency using relatively shallow networks, SNN algorithms that support multi-layer learning need to be implemented in hardware to handle more complex datasets. However, multi-layer learning requires more complicated functions such as softmax activation, which makes energy-efficient hardware design difficult. In this paper, we present a zero-spike prediction method that skips this complicated function in the convolution layer. By decomposing the original algorithm, the proposed method skips at least 76.90% of softmax activation operations without degrading classification accuracy.
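The core idea of the abstract can be illustrated with a small sketch: if a predictor determines that a neuron group will emit zero spikes in a timestep, the expensive softmax activation can be skipped entirely. The threshold-based predictor and all function names below are illustrative assumptions for exposition, not the paper's actual algorithm:

```python
import math

def softmax(xs):
    # Standard softmax, numerically stabilized by subtracting the max.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def zero_spike_predicted(potentials, threshold):
    # Hypothetical predictor: if every membrane potential is below the
    # firing threshold, assume the neuron group emits zero spikes this
    # timestep, so the softmax activation can be skipped.
    return max(potentials) < threshold

def layer_step(potentials, threshold):
    # Skip the costly softmax when zero spikes are predicted;
    # otherwise compute it as usual.
    if zero_spike_predicted(potentials, threshold):
        return None  # no spikes -> no activation needed this timestep
    return softmax(potentials)
```

In hardware, a cheap comparison like this replaces the exponentiation and division of softmax on the skipped timesteps, which is where the energy saving comes from.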

Table of Contents

Abstract
I. INTRODUCTION
II. PRELIMINARIES
A. SNN
B. STDP-Based Multi-layer Learning Algorithm
III. PROPOSED ZERO-SPIKE PREDICTION
IV. EXPERIMENTAL RESULT
V. CONCLUSION
ACKNOWLEDGMENT
REFERENCES

Authors

  • Hyeonseong Kim [ SoC Platform Research Center, Korea Electronics Technology Institute ]
  • Byung-Soo Kim [ SoC Platform Research Center, Korea Electronics Technology Institute ]
  • Taeho Hwang [ SoC Platform Research Center, Korea Electronics Technology Institute ] Corresponding Author


    Publication Information

    • Publication
      한국차세대컴퓨팅학회 학술대회
    • Frequency
      Semiannual
    • Coverage
      2021~2025
    • Decimal Classification
      KDC 566, DDC 004