Big data involves generating, storing, transferring, and analyzing large volumes of data to extract meaningful information. Information-centric networking (ICN) is an infrastructure that transfers big data from one node to another and provides in-network caches. For the software-defined network (SDN)-based ICN approach, a recently proposed centralized architecture deploys a single cache server based on the path-stretch value. Despite the advantages of a centralized cache in ICN, a single cache server has scalability issues in a large network. Moreover, because only the path-stretch ratio is considered for cache server deployment, traffic cannot be reduced optimally. To resolve these issues, we propose deploying multiple cache servers based on the joint optimization of four parameters: (i) closeness centrality; (ii) betweenness centrality; (iii) path-stretch values; and (iv) load balancing in the network. Our approach first computes the number and locations of cache servers offline from the network topology information, and the cache servers are placed at the corresponding locations in the network. Next, the controller installs flow rules at the switches so that each switch forwards content requests to its nearest cache server. When a content request arrives at a cache server, if the requested content is stored there, it is delivered to the requesting node; otherwise, the request is forwarded to the controller. The controller then computes a path such that the content provider first sends the content to the cache server; finally, a copy of the content is forwarded to the requesting node. Simulation results confirm that the proposed approach outperforms an existing state-of-the-art approach in terms of traffic overhead and average end-to-end delay.
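The offline placement step can be illustrated with a small sketch. The abstract does not give the optimization model, so the following is only an assumption-laden illustration: it computes closeness centrality by BFS and betweenness centrality with Brandes' algorithm on an unweighted topology, then ranks nodes by a weighted sum of the two (the weight `alpha`, the normalization, and the omission of the path-stretch and load-balancing terms are all simplifications, not the paper's method):

```python
from collections import deque

def bfs_distances(adj, src):
    """Hop counts from src to every reachable node (unweighted BFS)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def closeness(adj, node):
    """Closeness centrality: (n-1) / sum of shortest-path distances."""
    dist = bfs_distances(adj, node)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

def betweenness(adj):
    """Brandes' algorithm for betweenness on an undirected graph."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, preds = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                      # BFS counting shortest paths
            u = q.popleft()
            stack.append(u)
            for v in adj[u]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
                if dist[v] == dist[u] + 1:
                    sigma[v] += sigma[u]
                    preds[v].append(u)
        delta = {v: 0.0 for v in adj}
        while stack:                  # back-propagate dependencies
            w = stack.pop()
            for u in preds[w]:
                delta[u] += sigma[u] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    for v in bc:                      # undirected: each pair counted twice
        bc[v] /= 2.0
    return bc

def pick_cache_nodes(adj, k, alpha=0.5):
    """Rank nodes by a weighted sum of closeness and (normalized)
    betweenness and return the top-k as candidate cache locations."""
    bc = betweenness(adj)
    max_bc = max(bc.values()) or 1.0
    scores = {v: alpha * closeness(adj, v) + (1 - alpha) * bc[v] / max_bc
              for v in adj}
    return sorted(adj, key=lambda v: scores[v], reverse=True)[:k]

# Hypothetical 5-switch topology: "b" is the hub and should rank first.
topo = {
    "a": ["b"],
    "b": ["a", "c", "d"],
    "c": ["b", "e"],
    "d": ["b", "e"],
    "e": ["c", "d"],
}
print(pick_cache_nodes(topo, 2))
```

In the real system this ranking would be intersected with the path-stretch and load-balancing objectives before the controller fixes the final placement.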