
Optimal computing resource allocation algorithm in cloud computing based on hybrid differential parallel scheduling



To improve resource allocation and scheduling capability in cloud computing, optimize resource allocation, and raise overall efficiency, an optimal computing resource allocation algorithm based on hybrid differential parallel scheduling is proposed. The algorithm constructs data-structure and grid-structure models of computing resource allocation in cloud computing, and classifies the attributes of computing resources using sample clustering analysis of the resource information flow. The sliding window of computing resource allocation is divided into multiple sub-windows, and characteristic quantities associated with the resource allocation attributes are selected from neighboring samples as standard vector sets for adaptive pairing. The computing resources are then processed by singular value decomposition, transforming the resource allocation task into a least-squares problem, and a hybrid differential parallel computing method searches for the optimal resource scheduling vector set, preventing the allocation result from falling into a local optimum and thereby improving the global convergence of resource allocation. Simulation results show that, when the proposed method is used for resource allocation in cloud computing, clustering performance is high and convergence control over computing resources with different attributes is strong. The allocation speedup reaches 3.67, an improvement of 14.65% and 7.43% over the traditional HEFT and HCNF algorithms respectively; when the number of allocated nodes is 100, the overhead is only 5.6, a reduction of 14.56% and 8.33% compared with the traditional HEFT and HCNF algorithms. These results indicate that the proposed method has better practical applicability owing to its shorter execution time and lower overhead.
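The "hybrid differential parallel" optimization the abstract describes builds on differential evolution: a population of candidate scheduling vectors is repeatedly improved through differential mutation, crossover, and greedy selection, which helps the search escape local optima. The sketch below is a minimal, generic differential-evolution loop, not the paper's exact method; the cost function (squared distance of an allocation vector from an assumed ideal per-node load) is a hypothetical stand-in for the paper's least-squares scheduling objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x, ideal):
    # Toy objective: squared deviation of the allocation vector from an
    # assumed ideal per-node load (stand-in for the least-squares problem
    # derived via singular value decomposition in the paper).
    return float(np.sum((x - ideal) ** 2))

def differential_evolution(ideal, pop_size=20, dims=5, F=0.5, CR=0.9, gens=200):
    # Initialize a population of candidate scheduling vectors in [0, 1].
    pop = rng.uniform(0.0, 1.0, size=(pop_size, dims))
    fitness = np.array([cost(ind, ideal) for ind in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Pick three distinct donor individuals, excluding the target i.
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             size=3, replace=False)
            a, b, c = pop[idx]
            mutant = a + F * (b - c)            # differential mutation
            cross = rng.random(dims) < CR       # binomial crossover mask
            cross[rng.integers(dims)] = True    # guarantee >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            f = cost(trial, ideal)
            if f < fitness[i]:                  # greedy selection
                pop[i], fitness[i] = trial, f
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

ideal = np.full(5, 0.5)
best_vec, best_cost = differential_evolution(ideal)
print(best_cost)
```

Because selection is greedy (a trial vector only replaces its parent when it scores strictly better), the best cost is non-increasing across generations, which is the convergence property the abstract emphasizes.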

Keywords: resource allocation; resource; cloud computing; computing resource; allocation

Journal Title: Cluster Computing
Year Published: 2018

