Processing-in-memory (PIM) architectures have been proposed to accelerate state-of-the-art neuro-inspired algorithms, such as deep neural networks. In this article, we present PIMulator-NN, an event-driven, cross-level simulation framework for PIM-based neural network accelerators. By employing an event-driven simulation mechanism, PIMulator-NN can model the architecture in detail and capture the effects of specific design decisions. Moreover, we integrate a mainstream circuit-level simulation framework with PIMulator-NN to accurately simulate the area, latency, and energy consumption of analog computation units. To demonstrate the usage of PIMulator-NN, we implement several PIM designs with it and perform detailed simulations. The results show that memory accesses and interconnects have a considerable impact on system-level performance and energy, effects that are hard to capture with conventional performance-model-based estimations. We also observed several counter-intuitive results while modeling architectural details with PIMulator-NN. With several architecture templates, PIMulator-NN provides users with a platform to quickly build their own PIM architectures. PIMulator-NN captures the impact of different design choices (e.g., dataflow, interconnect, and data parallelism), enabling users to explore their design space efficiently.
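To illustrate the event-driven mechanism the abstract refers to, below is a minimal sketch of a discrete-event simulation kernel in Python. It is not the PIMulator-NN API; the class names, the component latencies (XBAR_COMPUTE_NS, NOC_HOP_NS), and the callbacks are hypothetical placeholders, and in a real cross-level flow such latencies would come from circuit-level simulation of the analog compute units.

```python
import heapq
from dataclasses import dataclass, field
from typing import Callable

@dataclass(order=True)
class Event:
    time: float                               # simulated time (ns) at which the event fires
    action: Callable = field(compare=False)   # callback executed when the event is popped

class EventDrivenSim:
    """Minimal discrete-event kernel: events are processed in timestamp order,
    and each handler may schedule further events (e.g., an analog crossbar
    computation finishing triggers an interconnect transfer)."""
    def __init__(self):
        self.now = 0.0
        self._queue = []

    def schedule(self, delay, action):
        heapq.heappush(self._queue, Event(self.now + delay, action))

    def run(self):
        while self._queue:
            ev = heapq.heappop(self._queue)
            self.now = ev.time
            ev.action()

# Hypothetical component latencies (ns) standing in for circuit-level results.
XBAR_COMPUTE_NS = 100.0
NOC_HOP_NS = 5.0

sim = EventDrivenSim()

def xbar_done(tile_id):
    print(f"[{sim.now:7.1f} ns] tile {tile_id}: crossbar MVM done")
    # Completion of the analog computation schedules the interconnect transfer,
    # so memory/interconnect effects show up naturally in the event timeline.
    sim.schedule(NOC_HOP_NS, lambda: noc_arrival(tile_id))

def noc_arrival(tile_id):
    print(f"[{sim.now:7.1f} ns] tile {tile_id}: result delivered over the NoC")

for t in range(2):
    sim.schedule(XBAR_COMPUTE_NS, lambda t=t: xbar_done(t))
sim.run()
```

Because every component interaction becomes a timestamped event, system-level effects such as interconnect contention or memory-access stalls appear directly in the simulated timeline, which is the kind of behavior the abstract notes is hard to capture with purely analytical performance models.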
               