Nowadays, artificial intelligence (AI) technologies, especially deep neural networks (DNNs), play a vital role in solving many problems in both academia and industry. To simultaneously meet the demands for performance, energy efficiency, and flexibility in DNN processing, various reconfigurable AI chips have been proposed in the past several years. They are based on FPGA or CGRA platforms and have domain-specific reconfigurability to customize the computing units and data paths for different DNN tasks without re-fabricating the chips. This paper surveys typical reconfigurable AI chips across three reconfiguration hierarchies: the processing element level, the processing element array level, and the chip level. Each reconfiguration hierarchy covers a set of important optimization techniques for DNN computation that are frequently adopted in practice. This paper lists reconfigurable AI chip designs in chronological order, discusses the hardware development process for each optimization technique, and analyzes the necessity of reconfigurability in AI task processing. Trends within each reconfiguration hierarchy and insights into the cooperation of techniques across hierarchies are also presented.