Many novel IoT-based applications demand low latency, large compute resources, and high privacy. These requirements have motivated the emergence of fog and edge computing to complement the low-privacy, high-latency cloud. Fog computing places computational servers closer to the user, typically within a city's vicinity, to reduce latency. However, because these servers are costly to deploy at scale and network infrastructure is unreliable in many regions, edge computing was proposed. Edge computing advocates leveraging compute resources, typically zero hops away, on distributed ensembles of colocated devices called FemtoClouds. In this paper, we propose MAESTRO, a system that enables users to offload computational jobs to multiple FemtoClouds in their immediate vicinity. For MAESTRO, we build an integrated architecture that includes two new scheduling algorithms for assigning computing workloads to FemtoClouds. Each scheduling algorithm is designed to let the system operate efficiently under either poor or strong network infrastructure. We implement a full prototype of our system and assess its performance on our experimental testbed. The results indicate that in communication-challenged environments, our specialized scheduler outperforms state-of-the-art schedulers by up to 55%, while in communication-friendly environments our other specialized scheduler outperforms state-of-the-art schedulers by up to 67%.
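To make the idea of network-condition-specific scheduling concrete, the sketch below shows one hypothetical way an offloading system could pick between a communication-aware and a compute-aware greedy scheduler when assigning tasks to FemtoClouds. This is an illustrative assumption only; the classes, cost models, and names (Task, FemtoCloudNode, schedule) are not MAESTRO's actual schedulers, which the abstract does not detail.

```python
# Illustrative sketch only: these cost models and names are assumptions,
# not MAESTRO's actual scheduling algorithms.
from dataclasses import dataclass


@dataclass
class Task:
    task_id: str
    cycles: float        # compute demand (CPU cycles)
    input_bytes: float   # data shipped to the chosen FemtoCloud


@dataclass
class FemtoCloudNode:
    node_id: str
    cycles_per_sec: float   # aggregate capacity of the colocated devices
    bandwidth_bps: float    # measured bandwidth from the user to this ensemble
    queued_cycles: float = 0.0


def comm_aware_cost(task: Task, node: FemtoCloudNode) -> float:
    """Completion-time estimate dominated by transfer time (poor networks)."""
    transfer = task.input_bytes * 8 / node.bandwidth_bps
    compute = (node.queued_cycles + task.cycles) / node.cycles_per_sec
    return transfer + compute


def compute_aware_cost(task: Task, node: FemtoCloudNode) -> float:
    """Completion-time estimate dominated by queueing/compute (good networks)."""
    return (node.queued_cycles + task.cycles) / node.cycles_per_sec


def schedule(tasks, nodes, good_network: bool):
    """Greedily assign each task to the node with the lowest estimated cost."""
    cost_fn = compute_aware_cost if good_network else comm_aware_cost
    assignment = {}
    for task in tasks:
        best = min(nodes, key=lambda n: cost_fn(task, n))
        best.queued_cycles += task.cycles
        assignment[task.task_id] = best.node_id
    return assignment


if __name__ == "__main__":
    nodes = [
        FemtoCloudNode("femtocloud-A", cycles_per_sec=4e9, bandwidth_bps=2e6),
        FemtoCloudNode("femtocloud-B", cycles_per_sec=2e9, bandwidth_bps=20e6),
    ]
    tasks = [Task("t1", cycles=6e9, input_bytes=5e6),
             Task("t2", cycles=1e9, input_bytes=40e6)]
    print(schedule(tasks, nodes, good_network=False))
```

In this toy setup, switching `good_network` changes which term dominates the estimated completion time, mirroring the intuition that a communication-challenged deployment should weigh data-transfer cost while a communication-friendly one can focus on compute load.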