Embracing the era of neuromorphic computing

In recent years, deep learning has achieved tremendous success in computer vision, natural language processing, human-machine game playing and other fields, where artificial intelligence can reach or even surpass human-level performance. Behind these achievements, however, lie serious challenges in the underlying hardware that hinder the further development of artificial intelligence. As the remarkable Moore's law slows down and the energy cost of the von Neumann bottleneck can no longer be afforded, current accelerator chips struggle to cope with demanding massive data, especially in power-limited scenarios. These significant challenges have led to a natural surge of interest in new computing paradigms, i.e. a computational scientific revolution[1]. Such a paradigm is not expected to replace the von Neumann architecture, which has worked well in the past, but to form an important complement to it for the growing number of emerging computing problems and applications, e.g. those in big data and artificial intelligence, that the previous architecture can no longer handle. Candidates for the new computing paradigm include in-memory computing, quantum computing and neuromorphic computing, each of which can solve certain important problems more successfully than classical computing systems, although all have demonstrated only a limited scope of application and accuracy to date. Among them, if we want to build on the victories of deep learning and further construct a general, efficient and brain-like intelligence, it is natural to develop the paradigm of neuromorphic computing, which tightly combines architecture, algorithms, circuits and devices. From this view, deep learning is only a precursor to the approaching era of neuromorphic computing. It has been about three decades since Carver Mead drew inspiration from the human brain and first proposed the concept of neuromorphic computing[2].
Neuromorphic computing takes advantage of analog signals to imitate the electrical properties of synapses and neurons as basic computing elements, and assembles them into functional systems following simplified rules of brain operation. Our brains use spikes to transmit and process information and operate on the edge of chaos, giving them incredibly rich computational dynamics as well as powerful capabilities for spatiotemporal integration. Since the introduction of neuromorphic computing, many impressive exploratory works have been completed, such as IBM's TrueNorth[3] and Intel's Loihi[4]. However, no research consensus on neuromorphic computing has yet been established. From the device perspective, synapses and neurons composed of multiple transistors are obviously costly, which restricts further scaling up. Fortunately, some emerging devices such as memristors can imitate synapses and neurons directly through the internal physical dynamics of single cells, and thus hold great promise for neuromorphic hardware. These devices are compatible with current semiconductor technology and can be used to construct both deep learning accelerators and neuromorphic computing systems (Fig. 1). From the algorithm perspective, spike-based neural network models remain immature compared with state-of-the-art artificial neural networks on existing benchmarks and tasks[5]. Nevertheless, it should be noted that existing effective algorithms are all tailored to classical computing systems, and the advancement of neuromorphic computing necessitates its own algorithms and benchmarks; in this sense, the two computing paradigms are incommensurable. Neuromorphic devices are essentially memristive devices whose resistance can be changed through internal physical states and external electrical stimulation, naturally corresponding to synapses with adjustable weights.
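As a rough sketch (not from the article), the spike-based transmission described above can be illustrated with a minimal leaky integrate-and-fire neuron model; the function name and all parameter values here are illustrative assumptions:

```python
def lif_neuron(input_current, v_thresh=1.0, v_reset=0.0, leak=0.9):
    """Simulate a leaky integrate-and-fire neuron over a sequence of
    input currents and return the binary spike train it emits."""
    v = v_reset
    spikes = []
    for i in input_current:
        v = leak * v + i          # leaky integration of the input
        if v >= v_thresh:         # threshold crossing emits a spike
            spikes.append(1)
            v = v_reset           # membrane potential resets after firing
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input still produces periodic spikes,
# because the membrane integrates information over time.
print(lif_neuron([0.3] * 10))    # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

The temporal integration in this toy model is what gives spiking systems their capacity for spatiotemporal processing, in contrast to the stateless activations of conventional artificial neurons.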
It has been shown that various emerging devices based on ion migration, phase transition, spin and ferroelectricity can achieve excellent modulation effects. For deep learning accelerators, ideal neuromorphic devices should offer high state precision, low variation, long retention, good linearity and a large dynamic range. However, no current neuromorphic device combines all of these properties. For example, memristors based on ion migration exhibit unavoidable variations, and devices based on phase transition suffer from conductance drift. In some interesting cases, these imperfections can instead be exploited as a computing resource. The nonlinearity in conductance modulation can accelerate the simulated annealing process in transiently chaotic neural networks for solving various optimization problems[6]. Moreover, the stochasticity in device conductance can be utilized as the random matrix in direct feedback alignment, reducing the training cost of neural networks (Fig. 1)[7]. For spike-driven neuromorphic computing involving the coding and representation of temporal information, neuromorphic devices should be able to process sequential…
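As a minimal sketch of the direct feedback alignment idea mentioned above, the fixed random matrix B below projects the output error straight to the hidden layer in place of the transposed weights used by backpropagation; in a memristive implementation, its entries could simply be stochastic device conductances. The network size, learning rate and data are illustrative assumptions, not details from reference [7]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer regression network: x -> h -> y, squared-error loss.
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
# Fixed random feedback matrix replacing W2.T in the backward pass.
B = rng.normal(0, 0.5, (n_hid, n_out))

def tanh_d(a):
    return 1.0 - np.tanh(a) ** 2   # derivative of tanh

x = rng.normal(size=n_in)
target = np.array([0.5, -0.5])

lr = 0.05
for _ in range(200):
    a1 = W1 @ x
    h = np.tanh(a1)
    y = W2 @ h
    e = y - target                  # output error
    # DFA: route the error through the fixed random matrix B,
    # so no transposed forward weights are needed.
    dh = (B @ e) * tanh_d(a1)
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(dh, x)

print(np.round(y, 2))               # converges close to the target
```

Because the feedback path never has to mirror the forward weights, random (even noisy) analog conductances suffice, which is what makes this training scheme attractive for memristive hardware.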

Keywords: intelligence; neuromorphic devices; deep learning; era neuromorphic; neuromorphic computing

Journal Title: Journal of Semiconductors
Year Published: 2021
