
cuConv: CUDA implementation of convolution for CNN inference


Convolutions are the core operation of deep learning applications based on Convolutional Neural Networks (CNNs). Current GPU architectures are highly efficient for training and deploying deep CNNs, and are widely used in production. State-of-the-art implementations, however, exhibit low efficiency for some commonly used network configurations. In this paper we propose a GPU-based implementation of the convolution operation for CNN inference that favors coalesced memory accesses, without requiring prior data transformations. Our experiments demonstrate notable performance improvements across a range of common CNN forward-propagation convolution configurations, with speedups of up to 2.29× with respect to the best implementation in cuDNN, covering a relevant region of the configuration space of currently existing approaches. This improvement yields speedups of up to 7.4% in CNN online inference use cases.
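For context, the operation being optimized is the direct forward convolution (in practice, a cross-correlation) used in CNN inference. The sketch below is a plain-Python illustration of that computation for a single channel with stride 1 and no padding; it is not the paper's cuConv CUDA kernel, only a minimal statement of what such a kernel computes.

```python
# Minimal sketch of the direct forward convolution (cross-correlation,
# as used in CNN inference). Illustrative only: the paper's contribution
# is a CUDA kernel for this operation that favors coalesced accesses
# without prior data transformations (e.g., without im2col).
def conv2d_direct(x, w):
    """Single-channel 2D cross-correlation, stride 1, no padding."""
    H, W = len(x), len(x[0])      # input height and width
    R, S = len(w), len(w[0])      # filter height and width
    out = [[0.0] * (W - S + 1) for _ in range(H - R + 1)]
    for i in range(H - R + 1):
        for j in range(W - S + 1):
            acc = 0.0
            for r in range(R):
                for s in range(S):
                    acc += x[i + r][j + s] * w[r][s]
            out[i][j] = acc
    return out
```

For example, `conv2d_direct([[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[1, 0], [0, 1]])` returns `[[6.0, 8.0], [12.0, 14.0]]`. GPU implementations parallelize these output positions across threads; how neighboring threads read the input determines whether global-memory accesses coalesce, which is the efficiency concern the paper addresses.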

Keywords: implementation; implementation convolution; cnn inference; convolution

Journal Title: Cluster Computing
Year Published: 2022


