
A Graph Neural Network Approach for Caching Performance Optimization in NDN Networks



Named Data Networking (NDN) is a new architecture with in-network caching ability. NDN nodes can cache data packets in their Content Store to satisfy future requests. Accurately caching popular content across the network is essential for NDN to reduce the traffic workload and improve network efficiency. However, traditional caching algorithms are poor at predicting dynamic future content popularity. In our paper, we propose a Graph Neural Network-based (GNN-based) caching strategy to optimize the caching performance in NDN. First, we utilize a Convolutional Neural Network (CNN) to extract time-series features for each NDN node. Second, a GNN is applied to predict the content caching probability at each NDN node. Third, each NDN node makes cache replacement decisions based on its content caching probability ranking: content with a high caching probability replaces content with a low probability. We compare our GNN-based caching strategy with three deep learning-based caching techniques, namely 1D-Convolutional Neural Network (1D-CNN), Long Short-Term Memory Encoder-Decoder (LSTM-ED), and Stacked Auto Encoder (SAE), and three classical benchmark caching strategies, namely Least Frequently Used (LFU), Least Recently Used (LRU), and First-In-First-Out (FIFO). All caching scenarios are simulated in the Mini-NDN platform and evaluated on tree and arbitrary network topologies. Empirical results suggest that, in the best case, the GNN-based caching approach achieves around a 50% higher cache hit ratio and 30% lower latency than the other deep learning-based caching strategies.
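The third step of the pipeline, probability-ranked cache replacement, can be sketched as below. This is an illustrative toy sketch, not the paper's implementation: the class name `ProbabilityRankedCache` and its methods are hypothetical, and the probabilities would in practice come from the GNN's per-node predictions rather than being supplied by hand.

```python
class ProbabilityRankedCache:
    """Toy cache (hypothetical, not the paper's code) that evicts the entry
    with the lowest predicted caching probability when capacity is reached."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # content name -> predicted caching probability

    def admit(self, name, prob):
        """Try to cache `name`. When full, replace the lowest-probability
        entry only if the newcomer's probability is higher.
        Returns True if `name` ends up cached."""
        if name in self.store:
            self.store[name] = prob  # refresh prediction
            return True
        if len(self.store) < self.capacity:
            self.store[name] = prob
            return True
        # Cache is full: find the current lowest-probability victim.
        victim = min(self.store, key=self.store.get)
        if prob > self.store[victim]:
            del self.store[victim]
            self.store[name] = prob
            return True
        return False  # newcomer rejected; cached set unchanged


cache = ProbabilityRankedCache(capacity=2)
cache.admit("/video/a", 0.9)
cache.admit("/video/b", 0.4)
cache.admit("/video/c", 0.7)   # evicts /video/b (0.4 < 0.7)
print(sorted(cache.store))      # ['/video/a', '/video/c']
```

The design choice mirrors the abstract's rule directly: eviction is driven solely by the predicted probability ranking, unlike LRU or LFU, which rank by recency or observed frequency.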

Keywords: neural network; network; caching performance; graph neural; based caching

Journal Title: IEEE Access
Year Published: 2022


