The 9th International Conference on Next Generation Computing 2023 (2023.12)
Pages
pp. 45-47
Authors
Hyeonseong Kim, Byung-Soo Kim, Taeho Hwang
Language
English (ENG)
URL
https://www.earticle.net/Article/A448115
Abstract
English
Spiking neural networks (SNNs), employing event-based spike computation, can be implemented in hardware that supports on-chip learning and inference in a power- and area-efficient manner. Although many SNN hardware designs have been proposed for energy efficiency using relatively shallow networks, SNN algorithms that support multi-layer learning must be implemented in hardware to handle more complex datasets. However, multi-layer learning requires more complicated functions such as softmax activation, which makes energy-efficient hardware design difficult. In this paper, we present a zero-spike prediction method that skips this complicated function in the convolution layer. By decomposing the original algorithm, the proposed method skips at least 76.90% of softmax activation operations without degrading classification accuracy.
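As a rough illustration only (the paper's actual decomposition is not given in this abstract), the core idea of skipping softmax activation for positions predicted to produce zero spikes can be sketched as follows. The boolean mask `predict_zero` stands in for whatever lightweight predictor the paper derives, and all names here are hypothetical:

```python
import numpy as np

def softmax(v):
    """Numerically stable softmax over a 1-D vector."""
    e = np.exp(v - np.max(v))
    return e / e.sum()

def forward_with_zero_spike_skip(potentials, predict_zero):
    """Apply the costly softmax activation only where a spike is expected.

    potentials:   (N, D) array of membrane potentials, one row per position.
    predict_zero: (N,) boolean mask; True means the predictor expects zero
                  spikes at that position, so the softmax is skipped and the
                  output row stays 0 -- this is where the energy is saved.
    """
    out = np.zeros_like(potentials)
    for i, p in enumerate(potentials):
        if predict_zero[i]:
            continue  # zero-spike predicted: skip the expensive activation
        out[i] = softmax(p)
    return out
```

In hardware, the fraction of `True` entries in the mask corresponds to the fraction of softmax operations avoided (at least 76.90% in the paper's result), provided the predictor never flags a position that would actually spike, so accuracy is unchanged.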
Table of Contents
Abstract
I. INTRODUCTION
II. PRELIMINARIES
  A. SNN
  B. STDP-Based Multi-layer Learning Algorithm
III. PROPOSED ZERO-SPIKE PREDICTION
IV. EXPERIMENTAL RESULT
V. CONCLUSION
ACKNOWLEDGMENT
REFERENCES