
DRL-Based Long-Term Resource Planning for Task Offloading Policies in Multiserver Edge Computing Networks


Multi-access edge computing (MEC) has been regarded as one of the essential technologies for mobile networks: by providing computing resources and services close to users, it avoids extra energy consumption and meets the low-latency, ultra-reliable requirements of emerging 5G applications. The task offloading policy plays a pivotal role in handling offloading requests and maximizing network computing performance. Most recently developed offloading solutions are designed for instant rewards and therefore neglect long-term computing resource optimization at the edge; they fail to deliver optimized network performance when computing requests increase significantly. In this paper, with the objective of maximizing long-term offloading benefits in delay and energy consumption, task offloading policies are proposed that first avoid resource over-distribution through deep reinforcement learning (DRL)-based resource reservation and server cooperation, and second maximize the average instant reward and the utilization of reserved resources through an optimization-based joint policy covering offloading decisions, transmission power allocation, and resource distribution. The DRL-based joint policy is evaluated in a simulated multi-server edge computing network. Compared to previous solutions, the DRL-based algorithms achieve higher and more reliable overall rewards. Of the three implemented DRL-based algorithms, fully cooperative multi-agent DRL accounts for cooperation between servers, achieving a 70.5% reduction in reward variance and a 13.4% increase in average rewards over 500 continuous operations. Resource-balanced policies based on long-term rewards help edge networks handle the explosive growth of computing-intensive 5G applications in the future.
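
To make the idea of long-term, cooperative resource reservation concrete, the following is a minimal, illustrative sketch only. It uses a toy tabular Q-learning loop in which several edge-server "agents" pick discrete reservation levels and share a cooperative reward that trades a delay proxy against an energy proxy. The environment model, reward weights, and state discretization are hypothetical placeholders, not the formulation used in the paper.

```python
# Toy sketch (hypothetical): independent Q-learning agents, one per edge server,
# choosing a discrete resource-reservation level to maximize a shared long-term
# reward that penalizes unmet demand (delay proxy) and over-reservation (energy proxy).
import random

N_SERVERS = 3            # cooperating edge servers (agents)
RESERVE_LEVELS = 5       # discrete reservation levels per server (0..4)
LOAD_LEVELS = 4          # discretized incoming-request load (0..3)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# One Q-table per server: Q[server][load_state][reservation_action]
Q = [[[0.0] * RESERVE_LEVELS for _ in range(LOAD_LEVELS)] for _ in range(N_SERVERS)]

def step(load, actions):
    """Toy environment: reward penalizes both unmet demand and over-reserved
    capacity; the reward is shared across all servers (fully cooperative)."""
    total_reserved = sum(actions)
    demand = load + 1                            # toy mapping from load state to demand
    delay_penalty = max(0, demand - total_reserved)
    energy_penalty = 0.3 * max(0, total_reserved - demand)
    reward = -(delay_penalty + energy_penalty)
    next_load = random.randrange(LOAD_LEVELS)    # i.i.d. load for simplicity
    return reward, next_load

load = random.randrange(LOAD_LEVELS)
for episode in range(5000):
    # epsilon-greedy reservation choice per agent
    actions = []
    for s in range(N_SERVERS):
        if random.random() < EPS:
            actions.append(random.randrange(RESERVE_LEVELS))
        else:
            row = Q[s][load]
            actions.append(row.index(max(row)))
    reward, next_load = step(load, actions)
    # independent Q-learning updates driven by the shared cooperative reward
    for s in range(N_SERVERS):
        best_next = max(Q[s][next_load])
        Q[s][load][actions[s]] += ALPHA * (reward + GAMMA * best_next - Q[s][load][actions[s]])
    load = next_load

print("Greedy reservation per load state:",
      [[Q[s][l].index(max(Q[s][l])) for l in range(LOAD_LEVELS)] for s in range(N_SERVERS)])
```

The shared reward is what makes the toy agents "fully cooperative" in the multi-agent sense referenced in the abstract; the paper's actual algorithms, state space, and joint offloading/power-allocation policy are more elaborate than this sketch.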

Keywords: long-term; DRL-based; resource; task offloading; edge computing

Journal Title: IEEE Transactions on Network and Service Management
Year Published: 2022



