Mobile edge computing (MEC) is an emerging computing paradigm that pushes cloud computing capability from centralized data centers to the network edge. As computing resources move closer to users, however, user mobility poses a new challenge: because user locations change unpredictably, services must be dynamically migrated between edge servers to maintain service performance, i.e., user-perceived latency. Since MEC is a highly distributed environment in which synchronizing information between servers is difficult, and the migration strategy must be produced in real time, this paper proposes a virtual machine migration strategy based on multi-agent deep reinforcement learning. It adopts centralized training with distributed execution: migration actions are guided by global information during training, while at run time each agent derives its migration action from local observations alone. Compared with centralized control, the proposed method alleviates the communication bottleneck; compared with other distributed control methods, it requires only local information, needs no inter-server communication, and perceives the current environment faster, so migration strategies can be generated more quickly. Simulation results show that the proposed strategy outperforms the baseline strategies in terms of convergence and energy consumption.
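To illustrate the centralized-training, distributed-execution pattern the abstract describes, the following is a minimal sketch, not the authors' actual model. It assumes a MADDPG-style structure in which each edge server has its own actor acting on local observations only, while a central critic sees the global state and joint action during training; the network sizes, observation dimensions, and class names are hypothetical.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps one agent's local observation to scores over candidate
    migration targets (used at execution time, no global information)."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)  # logits over candidate target servers

class CentralCritic(nn.Module):
    """Scores the joint state-action pair using global information,
    which is available only during centralized training."""
    def __init__(self, global_state_dim, joint_action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(global_state_dim + joint_action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, global_state, joint_actions):
        return self.net(torch.cat([global_state, joint_actions], dim=-1))

# Hypothetical dimensions: 3 edge servers, each with an 8-dim local observation
# and 4 candidate migration actions.
n_agents, obs_dim, n_actions = 3, 8, 4
actors = [Actor(obs_dim, n_actions) for _ in range(n_agents)]
critic = CentralCritic(global_state_dim=n_agents * obs_dim,
                       joint_action_dim=n_agents * n_actions)

# Distributed execution: each agent acts on its own observation only,
# with no communication between servers.
local_obs = [torch.randn(1, obs_dim) for _ in range(n_agents)]
actions = [actor(obs) for actor, obs in zip(actors, local_obs)]

# Centralized training: the critic evaluates the concatenated global
# state together with the joint action of all agents.
q_value = critic(torch.cat(local_obs, dim=-1), torch.cat(actions, dim=-1))
print(q_value.shape)  # torch.Size([1, 1])
```

At execution time only the actors are deployed on the edge servers, which is why the approach needs no inter-server communication once training is complete.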