Multipath TCP (MPTCP) has been standardized by the IETF as an extension of conventional TCP. It allows a system to use multiple paths simultaneously, aggregating bandwidth to improve network throughput. However, MPTCP must keep multiple interfaces open at the same time, so it consumes more energy to maintain the interface connections. How to manage subflows within MPTCP's scheduling system, i.e., to determine which paths should carry data, is therefore critical for reducing energy consumption while preserving network throughput. Owing to path heterogeneity and random packet losses in wireless networks, existing scheduling systems that select paths based on a path's delay or energy cost may suffer from performance degradation. In this article, we propose a reinforcement learning-based multipath scheduler, MPTCP-RL, that determines the optimal path set for different flows. MPTCP-RL combines deep reinforcement learning with an MPTCP transmission model to manage path usage across multiple connections, so that the sender can adaptively select the optimal path set for a given application according to the current network environment. MPTCP-RL is an asynchronous reinforcement learning framework that separates offline training from online decision making, ensuring that the learning process introduces no extra delay or overhead into the decision making of MPTCP path management. Extensive experimental results show that, compared with state-of-the-art mechanisms, MPTCP-RL significantly improves aggregate throughput and reduces energy consumption across a variety of network scenarios.
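The abstract describes an asynchronous design in which a policy is trained offline while the online scheduler only performs a fast lookup to pick a path set. The following Python sketch illustrates that decoupled pattern in a toy form; it is not the authors' implementation, and all names (Policy, online_scheduler, offline_trainer, measure), the candidate path sets, and the reward shape (throughput minus a weighted energy term) are illustrative assumptions.

import random
import threading
import queue

# Candidate path sets (actions): hypothetical interfaces and their combinations.
PATH_SETS = [("wifi",), ("lte",), ("wifi", "lte")]

class Policy:
    """Toy stand-in for a deep RL policy: a table of scores per (state, action)."""
    def __init__(self):
        self.q = {}                      # (state, action) -> estimated value
        self.lock = threading.Lock()

    def best_action(self, state):
        with self.lock:
            scores = [self.q.get((state, a), 0.0) for a in PATH_SETS]
        return PATH_SETS[scores.index(max(scores))]

    def update(self, state, action, reward, lr=0.1):
        with self.lock:
            old = self.q.get((state, action), 0.0)
            self.q[(state, action)] = old + lr * (reward - old)

def measure(action):
    # Placeholder for real per-subflow throughput and energy measurements.
    return random.uniform(1, 10) * len(action), 1.0 * len(action)

def online_scheduler(policy, experience_q, state):
    """Fast path: choose a path set without waiting for any training step."""
    action = policy.best_action(state)
    throughput, energy = measure(action)          # observe feedback after sending
    reward = throughput - 0.5 * energy            # assumed reward trade-off
    experience_q.put((state, action, reward))     # hand experience to the trainer
    return action

def offline_trainer(policy, experience_q):
    """Slow path: consume experience and update the policy asynchronously."""
    while True:
        state, action, reward = experience_q.get()
        policy.update(state, action, reward)

if __name__ == "__main__":
    policy, exp_q = Policy(), queue.Queue()
    threading.Thread(target=offline_trainer, args=(policy, exp_q), daemon=True).start()
    for _ in range(5):
        print(online_scheduler(policy, exp_q, state="high_rtt_difference"))

The point of the separation is that the decision path executes only a cheap policy lookup, while updates happen in a background thread, mirroring (at a very high level) the abstract's claim that learning adds no delay or overhead to path-management decisions.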
               