Neural Network Implementations on Tiled In-Memory-Computing Systems
This event is free and open to the public.
Compute-in-Memory (CIM) implemented with Resistive-Random-Access-Memory (RRAM) crossbars is a promising approach for accelerating Convolutional Neural Network (CNN) computations. Implementing modern state-of-the-art CNN models in CIM systems is, however, not without challenges.
First, the growing number of parameters in state-of-the-art CNN models increases the required on-chip weight storage. We will therefore discuss RRAM-CIM-aware CNN compression techniques, covering two promising approaches: fine-grained sparse neural networks and tensor-train decomposition.
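As a rough, hypothetical illustration of the tensor-train idea (not the speaker's specific method), the NumPy sketch below factors a small 4-D weight tensor into TT cores via sequential truncated SVDs; all shapes, ranks, and function names are illustrative assumptions.

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """Factor an N-D tensor into tensor-train (TT) cores via truncated SVDs.

    Each core has shape (rank_in, dim, rank_out); storing the cores in place
    of the full tensor is what yields the weight-storage savings.
    """
    shape = tensor.shape
    cores = []
    rank = 1
    mat = tensor
    for dim in shape[:-1]:
        # Unfold: current left rank and mode dimension become the rows.
        mat = mat.reshape(rank * dim, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        new_rank = min(max_rank, len(s))        # truncate to cap the TT rank
        cores.append(u[:, :new_rank].reshape(rank, dim, new_rank))
        mat = np.diag(s[:new_rank]) @ vt[:new_rank]
        rank = new_rank
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into the full tensor (for checking accuracy)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))
```

For example, a 256x256 weight matrix reshaped to (16, 16, 16, 16) and decomposed with a modest TT rank is stored as a few small cores totaling a few thousand parameters instead of 65,536.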
In addition, RRAM-based CIM systems are known to suffer from computational errors. Unlike digital computation, errors in analog computing accumulate during the computation. We will discuss the noise-tolerance properties of CNN models in CIM systems and present guidelines for training CNNs for high noise tolerance.
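One common training guideline in this area is noise injection: perturbing the weights during the forward pass so the learned model tolerates analog read errors. The toy NumPy sketch below illustrates this on a linear model; the multiplicative Gaussian noise model, sigma value, and training setup are assumptions for illustration, not the speaker's recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_matvec(w, x, sigma=0.05):
    """Analog-style matrix-vector product with fresh per-read weight noise,
    mimicking RRAM conductance variation on each crossbar read."""
    w_noisy = w * (1.0 + sigma * rng.standard_normal(w.shape))
    return w_noisy @ x

def train(x, y, sigma, steps=500, lr=0.1):
    """Fit y = w @ x by gradient descent with noise-injected forward passes.

    Gradients are taken w.r.t. the clean weights (straight-through on the
    noise), so w converges to a solution that averages out the perturbation.
    """
    w = np.zeros((y.shape[0], x.shape[0]))
    for _ in range(steps):
        pred = noisy_matvec(w, x, sigma)       # noisy forward pass
        grad = (pred - y) @ x.T / x.shape[1]   # MSE gradient w.r.t. w
        w -= lr * grad
    return w
```

Because the injected noise is zero-mean, the model still converges near the true weights while becoming robust to the per-read perturbations it will see at inference time.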
Chair: Professor Wei Lu