
HOME: A Holistic GPU Memory Management Framework for Deep Learning

We propose HOlistic MEmory management (HOME), a new framework for deciding tensor placements in large DNN training when GPU memory is insufficient. HOME combines tensor swapping with tensor recomputation to reduce the GPU memory footprint. Unlike existing work that considers only partial DNN model information, HOME takes the whole model into account when making tensor placement decisions. Specifically, HOME uses a custom-designed particle swarm optimization algorithm, operating over a greatly reduced search space, to find a globally optimized placement for each tensor of the DNN model. This holistic awareness of the entire model enables HOME to achieve high performance under a given GPU memory constraint. We implement HOME in PyTorch and evaluate it on six popular DNN models. Experimental results show that HOME outperforms vDNN and Capuchin in throughput by up to 5.7× and 1.3×, respectively. Furthermore, HOME improves the maximum batch size by up to 2.8× over the original PyTorch and up to 1.3× over Capuchin.
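
The abstract describes HOME's core mechanism: a particle swarm optimization (PSO) search that assigns each tensor one of three placements (keep in GPU memory, swap to host memory, or recompute on demand). The paper's actual algorithm, cost model, and search-space reduction are not reproduced here, so the Python sketch below is only a hedged illustration of that idea; the tensor statistics, memory budget, `fitness` cost model, and PSO hyperparameters are all assumptions made for the example, not HOME's implementation.

```python
# A minimal sketch (not HOME's actual code) of PSO-based tensor placement:
# each tensor is assigned KEEP, SWAP, or RECOMPUTE, and the swarm searches
# for the assignment with the lowest estimated overhead under a memory budget.
import random

KEEP, SWAP, RECOMPUTE = 0, 1, 2

# Hypothetical per-tensor statistics: (size in MB, swap cost, recompute cost).
tensors = [(512, 4.0, 1.5), (256, 2.0, 3.0), (1024, 8.0, 2.5), (128, 1.0, 0.5)]
MEMORY_BUDGET_MB = 1200  # assumed GPU memory limit for this sketch


def decode(position):
    """Map a particle's continuous coordinates to discrete placements."""
    return [int(max(0, min(2, round(x)))) for x in position]


def fitness(position):
    """Estimated runtime overhead, with a penalty for exceeding the budget."""
    placements = decode(position)
    overhead, resident = 0.0, 0.0
    for (size, swap_cost, recompute_cost), p in zip(tensors, placements):
        if p == KEEP:
            resident += size
        elif p == SWAP:
            overhead += swap_cost
        else:
            overhead += recompute_cost
    if resident > MEMORY_BUDGET_MB:
        overhead += 1000.0 * (resident - MEMORY_BUDGET_MB)  # infeasible penalty
    return overhead


def pso(num_particles=20, iters=100, w=0.7, c1=1.4, c2=1.4):
    """Standard PSO over continuous positions that decode to placements."""
    dim = len(tensors)
    pos = [[random.uniform(0, 2) for _ in range(dim)] for _ in range(num_particles)]
    vel = [[0.0] * dim for _ in range(num_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(num_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(num_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return decode(gbest), gbest_val


if __name__ == "__main__":
    placements, cost = pso()
    print("placements:", placements, "estimated overhead:", cost)
```

In this toy setup, placements whose resident memory exceeds the budget are simply penalized; HOME's custom PSO presumably treats the memory constraint and the search-space reduction more carefully, but those details are not given in the abstract.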

Keywords: gpu memory; memory management; framework; memory; home

Journal Title: IEEE Transactions on Computers
Year Published: 2023
