Edge computing can provide high-bandwidth, low-latency service for big data tasks by leveraging the computing, storage, and network resources of the edge side. With the development of microservice and Docker technology, service providers can flexibly and dynamically cache microservices at the edge to respond efficiently with limited resources. Automatically caching the needed services on the nearest edge nodes and dynamically scheduling users' requests lets computing power and software services follow the users, providing continuous service. However, achieving this goal requires overcoming several challenges, such as the large fluctuation of user devices' requests at the edge and the lack of collaboration among edge nodes. In this article, dynamic computing power scheduling and collaborative task scheduling among edge nodes are developed jointly. The problem is formulated as a multiobjective optimization problem that sequentially minimizes the deadline missing rate of requests and the average task completion time. We propose an adaptive mechanism for dynamically collaborative computing power and task scheduling (ADCS) in the edge environment to solve this problem. It adopts a greedy decision method to schedule computing tasks so that their deadline requirements are met, and it uses a best-fit method to adjust computing resources according to changes in users' requests. Simulation results show that ADCS decreases the deadline missing rate and reduces the average completion time: compared with DSR and CoDSR, the deadline missing rate is reduced by 59.91% and 19.95%, respectively, and the average completion time is decreased by 37.87% and 6.71%.
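The abstract names two heuristics, greedy deadline-aware task placement and best-fit resource adjustment, without giving their details. The sketch below is an illustrative interpretation, not the authors' implementation: the class names (`EdgeNode`, `Task`), the FIFO queue assumption, and the capacity model are all assumptions introduced for illustration.

```python
# Minimal sketch (assumed, not the paper's ADCS implementation) of the two
# heuristics the abstract names: greedy, deadline-aware placement of each
# incoming task, and best-fit selection of a node when adjusting cached
# microservice capacity. All names and the FIFO-queue model are hypothetical.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Task:
    task_id: int
    service: str          # microservice the task needs
    work: float           # required compute (abstract units)
    deadline: float       # absolute deadline (seconds from now)

@dataclass
class EdgeNode:
    node_id: int
    speed: float                               # compute units per second
    cached: set = field(default_factory=set)   # microservices cached here
    busy_until: float = 0.0                    # time when the node frees up

    def completion_time(self, task: Task) -> float:
        # Earliest time this node could finish the task (FIFO queue assumed).
        return self.busy_until + task.work / self.speed

def greedy_schedule(task: Task, nodes: list[EdgeNode]) -> Optional[EdgeNode]:
    """Greedily pick the node that finishes the task earliest while still
    meeting its deadline; prefer nodes that already cache the service."""
    candidates = [n for n in nodes if task.service in n.cached] or nodes
    feasible = [n for n in candidates if n.completion_time(task) <= task.deadline]
    if not feasible:
        return None                            # deadline would be missed
    best = min(feasible, key=lambda n: n.completion_time(task))
    best.busy_until = best.completion_time(task)
    return best

def best_fit_node(demand: float, free_capacity: dict[int, float]) -> Optional[int]:
    """Best-fit: place a new microservice replica on the node whose spare
    capacity exceeds the demand by the smallest margin."""
    fitting = {nid: cap for nid, cap in free_capacity.items() if cap >= demand}
    if not fitting:
        return None
    return min(fitting, key=fitting.get)

if __name__ == "__main__":
    nodes = [EdgeNode(0, speed=2.0, cached={"detect"}),
             EdgeNode(1, speed=1.0, cached={"detect", "encode"})]
    t = Task(task_id=1, service="detect", work=3.0, deadline=2.0)
    chosen = greedy_schedule(t, nodes)
    print("task placed on node:", chosen.node_id if chosen else "none")
    print("best-fit node for a new replica:",
          best_fit_node(demand=1.5, free_capacity={0: 4.0, 1: 2.0}))
```

Under these assumptions, the greedy step keeps the deadline missing rate low by rejecting infeasible placements up front, while the best-fit step concentrates spare capacity so that later fluctuations in request load can still be absorbed.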