FPDeep: Scalable Acceleration of CNN Training on Deeply-Pipelined FPGA Clusters

Deep convolutional neural networks (CNNs) have revolutionized numerous applications, but the demand for ever more performance remains unabated. Scaling CNN computations to larger clusters is generally done by distributing tasks in batch mode using methods such as distributed synchronous SGD. Among the issues with this approach is that, to make the distributed cluster work with high utilization, the workload distributed to each node must be large; this implies nontrivial growth in the SGD mini-batch size. In this article we propose a framework, called FPDeep, which uses a hybrid of model and layer parallelism to configure distributed reconfigurable clusters to train CNNs. This approach has numerous benefits. First, the design does not suffer from performance loss due to batch-size growth. Second, work and storage are balanced among nodes through novel workload and weight partitioning schemes. Part of the mechanism is the surprising finding that it is preferable to store excess weights in neighboring devices rather than in local off-chip memory. Third, the entire system is a fine-grained pipeline. This leads to high parallelism and utilization, and also minimizes the time that features need to be cached while waiting for back-propagation. As a result, storage demand is reduced to the point where only on-chip memory is used for the convolution layers. Fourth, we find that the simplest topology, a 1D array, is preferred for interconnecting the FPGAs, thus enabling widespread applicability. We evaluate FPDeep with the AlexNet, VGG-16, and VGG-19 benchmarks. Results show that FPDeep scales well to a large number of FPGAs, with the limiting factor being the FPGA-to-FPGA bandwidth. With 250 Gb/s of bidirectional bandwidth per FPGA, which is easily supported by current-generation FPGAs, FPDeep performance scales linearly up to 100 FPGAs. Energy efficiency is evaluated with respect to GOPs/J. FPDeep provides, on average, 6.4× higher energy efficiency than comparable GPU servers.
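
To make the workload-partitioning idea concrete, the following is a minimal Python sketch, not FPDeep's actual implementation: it shows how per-layer convolution workloads (approximated by multiply-accumulate counts) could be balanced across a 1D chain of FPGAs, with a layer allowed to split across neighboring devices as in fine-grained, sub-layer pipelining. The layer shapes and the partition_1d helper are illustrative assumptions only.

"""Illustrative sketch only (not FPDeep's actual code): balance per-layer
convolution workloads across a 1-D chain of FPGAs. A layer may be split
across neighboring devices, so each device ends up with roughly
total_work / num_devices multiply-accumulate operations (MACs)."""

from dataclasses import dataclass


@dataclass
class ConvLayer:
    name: str
    out_h: int      # output feature-map height
    out_w: int      # output feature-map width
    in_ch: int      # input channels
    out_ch: int     # output channels
    k: int          # square kernel size

    def macs(self) -> int:
        # Multiply-accumulates for one image (stride 1, 'same' padding assumed).
        return self.out_h * self.out_w * self.in_ch * self.out_ch * self.k * self.k


def partition_1d(layers, num_devices):
    """Return, for each device, a list of (layer_name, fraction) pieces such
    that every device carries an approximately equal share of the total MACs."""
    total = sum(layer.macs() for layer in layers)
    target = total / num_devices
    assignment = [[] for _ in range(num_devices)]
    dev, used = 0, 0.0
    for layer in layers:
        remaining = float(layer.macs())
        while remaining > 1e-6:
            if dev == num_devices - 1:
                # The last device in the chain absorbs whatever is left.
                assignment[dev].append((layer.name, remaining / layer.macs()))
                break
            take = min(remaining, target - used)
            assignment[dev].append((layer.name, take / layer.macs()))
            remaining -= take
            used += take
            if used >= target - 1e-6:
                dev, used = dev + 1, 0.0
    return assignment


if __name__ == "__main__":
    # Rough VGG-style convolution shapes, chosen purely for illustration.
    layers = [
        ConvLayer("conv1_1", 224, 224, 3, 64, 3),
        ConvLayer("conv1_2", 224, 224, 64, 64, 3),
        ConvLayer("conv2_1", 112, 112, 64, 128, 3),
        ConvLayer("conv2_2", 112, 112, 128, 128, 3),
        ConvLayer("conv3_1", 56, 56, 128, 256, 3),
    ]
    for i, pieces in enumerate(partition_1d(layers, num_devices=4)):
        shares = ", ".join(f"{name}: {frac:.2f}" for name, frac in pieces)
        print(f"FPGA {i}: {shares}")

In the actual FPDeep framework the partitioning also balances weight storage, pushing excess weights to neighboring devices rather than to local off-chip memory; this toy sketch balances compute only and omits that aspect.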

Keywords: CNN training; FPGA clusters; deep pipelining; scalable acceleration; FPDeep

Journal Title: IEEE Transactions on Computers
Year Published: 2020
