A matheuristic approach based on a reduced two-stage Stochastic Integer Linear Programming (SILP) model is presented. The approach constructs a policy dynamically, on the fly, during the rollout algorithm, which serves as the Approximate Dynamic Programming (ADP) lookahead solution approach for a Multi-Depot Dynamic Vehicle Routing Problem with Stochastic Road Capacity (MDDVRPSRC) framed as a Markov Decision Process (MDP). First, a Deterministic Multi-Depot VRP with Road Capacity (D-MDVRPRC) is presented. It is then extended to MDVRPSRC-2S, an offline two-stage SILP model of the MDDVRPSRC. Both models are validated with CPLEX on small simulated instances. Next, two reduced versions of the MDVRPSRC-2S model (MDVRPSRC-2S1 and MDVRPSRC-2S2) are derived, each dedicated to one routing task: replenishing supplies and delivering them, respectively. The two reduced models are applied interchangeably, depending on the current vehicle capacity, and solved repeatedly during the execution of rollout in reinforcement learning. It is shown that a base policy consisting of an exact optimal decision at each decision epoch can be obtained constructively through these reduced two-stage SILP models. Results for the resulting rollout policy, with CPLEX solving the reduced models during rollout, are also presented to validate the reduced models and the matheuristic algorithm. The approach is proposed as a simple implementation of rollout for the lookahead approach in ADP.
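The model-switching logic described above can be sketched in a few lines. The following is a minimal, hypothetical Python sketch, not the paper's implementation: the names (`State`, `solve_mdvrpsrc_2s1`, `solve_mdvrpsrc_2s2`) and the illustrative rule that an empty vehicle triggers the replenishment model are assumptions; in the actual approach each `solve_*` call would be a CPLEX solve of the corresponding reduced two-stage SILP at that decision epoch.

```python
# Sketch of the model-switching base policy described in the abstract.
# All names and the capacity-based switching rule are illustrative
# placeholders; the paper's SILP formulations and CPLEX calls are not
# reproduced here.
from dataclasses import dataclass

@dataclass
class State:
    """Minimal stand-in for the MDP state at a decision epoch."""
    vehicle_load: int        # supplies currently on board
    epoch: int
    horizon: int = 5         # toy terminal condition for the sketch

    @property
    def terminal(self) -> bool:
        return self.epoch >= self.horizon

def solve_mdvrpsrc_2s1(state: State):
    """Placeholder for a CPLEX solve of the replenishment model
    (MDVRPSRC-2S1): route the vehicle to a depot and reload."""
    return "replenish", State(vehicle_load=10, epoch=state.epoch + 1,
                              horizon=state.horizon)

def solve_mdvrpsrc_2s2(state: State):
    """Placeholder for a CPLEX solve of the delivery model
    (MDVRPSRC-2S2): route the vehicle to deliver supplies."""
    return "deliver", State(vehicle_load=state.vehicle_load - 1,
                            epoch=state.epoch + 1, horizon=state.horizon)

def base_policy_decision(state: State):
    """Choose the reduced model from the current vehicle capacity
    (assumed rule: replenish when empty, deliver otherwise)."""
    if state.vehicle_load == 0:
        return solve_mdvrpsrc_2s1(state)   # replenishment model
    return solve_mdvrpsrc_2s2(state)       # delivery model

def base_policy(state: State):
    """Constructively build the base policy: one exact optimal
    decision per epoch via the appropriate reduced model."""
    decisions = []
    while not state.terminal:
        action, state = base_policy_decision(state)
        decisions.append(action)
    return decisions

print(base_policy(State(vehicle_load=0, epoch=0)))
```

In the rollout algorithm itself, this base policy would be simulated from each candidate post-decision state to estimate its value, so the reduced models are solved many times per epoch; keeping them small is what makes the matheuristic tractable.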