Data movement overheads caused by the recent explosion in big data applications have made the traditional von Neumann architecture unable to tackle big data workloads. Processing in Memory (PIM), where computational tasks are performed within the memory itself, has drawn increasing attention. To present meaningful insights to readers, we divide the current PIM paradigm into charge-based and resistance-based categories according to the underlying memory devices. This mini tutorial aims to provide a concise overview of the implementation of PIM schemes, highlighting important macro prototypes for artificial intelligence applications released in the past five years.