Analytical performance models have been leveraged extensively to analyze and improve the performance and cost of various cloud computing services. However, in the case of serverless computing, which is projected to be the dominant form of cloud computing in the future, analytical performance models that help with the analysis and optimization of such platforms have been lacking. In this work, we propose an analytical performance model that captures the unique details of serverless computing platforms. The model can be leveraged to improve quality of service and resource utilization and to reduce the operational cost of serverless platforms. The proposed model also provides a framework that enables serverless platforms to become workload-aware and operate differently for different workloads, offering a better trade-off between cost and performance depending on the user's preferences. Current serverless offerings require the user to have extensive knowledge of the platform's internals to perform efficient deployments. Using the proposed analytical model, the provider can simplify the deployment process by calculating performance metrics for users even before physical deployment. We validate the applicability and accuracy of the proposed model through extensive experimentation on AWS Lambda. We show that the model can calculate essential performance metrics such as the average response time, the probability of a cold start, and the average number of function instances in steady state. We also show how the performance model can be used to tune the serverless platform for each workload, resulting in better performance or lower cost without sacrificing the other. The presented model imposes no unrealistic restrictions, so it offers a high degree of fidelity while remaining tractable at large scale.
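To make the kind of metrics mentioned in the abstract concrete, the following is a minimal sketch of how cold-start probability, mean response time, and average instance count might be roughly estimated from a simple queueing-style approximation under Poisson arrivals. It is not the model proposed in the paper; the function name, parameter names (arrival_rate, warm_time, cold_overhead, expiry), and the single-stream cold-start approximation are all illustrative assumptions.

```python
# Hedged sketch: a simplistic estimate of serverless performance metrics.
# This is NOT the paper's analytical model; it only illustrates the kinds of
# quantities such a model would predict (cold-start probability, mean response
# time, average instance count) under assumed Poisson arrivals.
import math


def estimate_metrics(arrival_rate: float, warm_time: float,
                     cold_overhead: float, expiry: float) -> dict:
    """Rough steady-state estimates for a single function.

    arrival_rate  -- request rate in requests per second (Poisson assumption)
    warm_time     -- service time of a warm invocation, in seconds
    cold_overhead -- extra latency added by a cold start, in seconds
    expiry        -- idle time before the platform reclaims a warm instance, in seconds
    """
    # Crude low-concurrency approximation: a request triggers a cold start if
    # the gap since the previous request exceeds the keep-alive window.
    # With exponential inter-arrival times, P(gap > expiry) = exp(-rate * expiry).
    p_cold = math.exp(-arrival_rate * expiry)

    # Mean response time mixes the warm path and the cold path.
    mean_response = warm_time + p_cold * cold_overhead

    # Little's law (L = lambda * W) gives the average number of busy instances;
    # idle-but-warm instances are ignored here, unlike in a full model.
    avg_busy_instances = arrival_rate * mean_response

    return {
        "p_cold": p_cold,
        "mean_response_s": mean_response,
        "avg_busy_instances": avg_busy_instances,
    }


if __name__ == "__main__":
    # Example: 0.5 req/s, 200 ms warm service, 1 s cold-start penalty,
    # 10-minute keep-alive window (all values are hypothetical).
    print(estimate_metrics(arrival_rate=0.5, warm_time=0.2,
                           cold_overhead=1.0, expiry=600))
```

Such a back-of-the-envelope calculation ignores concurrency effects and instance reuse, which is precisely the gap a fidelity-preserving analytical model of the kind described above is meant to close.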
               