题目/Title:ANP-I: A 28nm 1.5pJ/SOP Asynchronous Spiking Neural Network Processor Enabling Sub-0.1μJ/Sample On-Chip Learning for Edge-AI Applications
作者/Author:
Jilin Zhang, Dexuan Huo, Jian Zhang, Qi Liu, Liyang Pan, Zhihua Wang, Ning Qiao, Kea-Tiong Tang, Hong Chen
会议/Conference:ISSCC 2023
地点/Location:San Francisco, CA, USA
年份/Issue Date:19-23 Feb. 2023
页码/Pages:pp. 21-23
摘要/Abstract:
With the development of on-chip learning processors for edge-AI applications, the energy efficiency of NN inference and training is increasingly critical. Since on-chip training dominates the energy consumption of edge-AI processors [1], [2], [4], [5], reducing it is of paramount importance. Spiking neural networks (SNNs) offer more energy-efficient inference and learning than convolutional neural networks (CNNs) or deep neural networks (DNNs), but SNN-based processors face three challenges that need to be addressed (Fig. 22.6.1), each sketched in code after this list:
1) During on-chip training, some of the factors in the ΔW computation are zero, yielding ΔW = 0; computing these terms and accessing memory to write them back is redundant work.
2) Once a given accuracy is reached, further data does not improve accuracy significantly, and 95% of the energy is wasted on unnecessary processing of the input spike events that follow.
3) With sparse input-spike events, the number of spike events varies from time step to time step. If spike processing is synchronized by time step, the worst case must be provisioned for, wasting both energy and time.
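Challenge 1 concerns zero-valued factors in the weight-update computation. Below is a minimal Python sketch of the zero-skipping idea, assuming a dense weight matrix and a multiplicative pre-spike × post-error update rule; the function name, data layout, and learning rule are illustrative assumptions, not the paper's actual scheme:

```python
import numpy as np

def sparse_delta_w(weights, pre_spikes, post_error, lr=0.01):
    """Hypothetical sketch of challenge 1: skip ΔW terms whose factors
    are zero, so ΔW = 0 entries never cost compute or a memory write."""
    active_pre = np.flatnonzero(pre_spikes)    # nonzero input spike events
    active_post = np.flatnonzero(post_error)   # nonzero error terms
    # Only the (active_pre x active_post) sub-block can have ΔW != 0;
    # every other entry would be a redundant compute + memory access.
    for i in active_pre:
        for j in active_post:
            weights[i, j] += lr * pre_spikes[i] * post_error[j]
    return weights
```

Only the sub-block indexed by nonzero pre-spikes and nonzero error terms is touched, so compute and weight-memory traffic scale with event sparsity rather than with the full weight matrix.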
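Challenge 2 amounts to stopping training once accuracy saturates, rather than processing every remaining input sample. A hedged sketch of that control loop, with hypothetical train_step and eval_acc callbacks standing in for the on-chip mechanisms:

```python
def train_until_converged(samples, train_step, eval_acc,
                          target_acc=0.95, check_every=1000):
    """Hypothetical sketch of challenge 2: stop consuming training
    samples once accuracy saturates, instead of processing them all."""
    n = 0
    for n, sample in enumerate(samples, 1):
        train_step(sample)                     # on-chip weight update
        if n % check_every == 0 and eval_acc() >= target_acc:
            break   # further samples would be mostly wasted energy
    return n        # number of samples actually consumed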
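Challenge 3 motivates event-driven (asynchronous) processing, where the work performed tracks the actual number of spike events rather than a worst-case per-time-step budget. A minimal sketch, assuming a fan-out table mapping each source neuron to (destination, weight) pairs; all names and the integrate-and-fire model shown are illustrative assumptions:

```python
from collections import deque

def process_events(event_queue, membrane, fanout, threshold=1.0):
    """Hypothetical sketch of challenge 3: event-driven processing whose
    work scales with the number of events actually present."""
    out_spikes = []
    while event_queue:                     # drain only the events that exist
        src = event_queue.popleft()        # one input spike event
        for dst, w in fanout.get(src, []): # fan-out of this event only
            membrane[dst] += w             # integrate the synaptic weight
            if membrane[dst] >= threshold:
                membrane[dst] = 0.0        # reset after firing
                out_spikes.append(dst)
    return out_spikes
```

With a sparse input such as event_queue = deque([0, 3, 3]), the loop runs three iterations regardless of how long the time step is, which is the efficiency argument made for asynchronous operation.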