This paper investigates a vehicle dispatching and routing problem in ride-sharing autonomous mobility-on-demand systems. We present a new method that optimizes both operation cost and passenger quality of service from a global, farsighted view by leveraging historical data. The method comprises two parts: one for vehicle routing decision making and the other for request-vehicle assignment. In particular, vehicle routing decision making is formulated as a Markov decision process that accounts for idle vehicle rebalancing, with properly designed states, actions, and rewards. By sampling future requests from the historical probability distribution, look-ahead decision making is realized via a deep reinforcement learning framework composed of a convolutional neural network and a double deep Q-learning module. A request-vehicle assignment scheme is then presented based on the value learned during vehicle routing. Satisfactory performance of the method in terms of service rate, average waiting time, and travel distance is demonstrated by experimental results under various fleet sizes and vehicle capacities.
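The abstract pairs a CNN state encoder with double deep Q-learning to estimate routing values. The sketch below illustrates that general combination only; the grid-based state encoding, channel meanings, network sizes, and action space are illustrative assumptions and not the paper's actual design.

```python
# Minimal sketch of a CNN Q-network with a double DQN update, assuming a
# grid-encoded city state (channels are hypothetical, e.g. idle vehicles,
# pending requests, sampled future demand).
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """CNN mapping a grid-encoded state to Q-values over routing actions."""
    def __init__(self, in_channels: int = 3, grid: int = 10, n_actions: int = 5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32 * grid * grid, n_actions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.conv(x).flatten(start_dim=1)
        return self.head(h)

def double_dqn_loss(online: QNetwork, target: QNetwork,
                    s, a, r, s_next, done, gamma: float = 0.99) -> torch.Tensor:
    """Double DQN: the online net selects the next action, the target net evaluates it."""
    q_sa = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_a = online(s_next).argmax(dim=1, keepdim=True)   # action selection
        next_q = target(s_next).gather(1, next_a).squeeze(1)  # action evaluation
        y = r + gamma * (1.0 - done) * next_q
    return nn.functional.mse_loss(q_sa, y)

if __name__ == "__main__":
    # Toy batch: 4 samples of a 3-channel 10x10 grid state.
    online, target = QNetwork(), QNetwork()
    target.load_state_dict(online.state_dict())
    s = torch.randn(4, 3, 10, 10)
    a = torch.randint(0, 5, (4,))
    r = torch.randn(4)
    s_next = torch.randn(4, 3, 10, 10)
    done = torch.zeros(4)
    loss = double_dqn_loss(online, target, s, a, r, s_next, done)
    loss.backward()
    print(float(loss))
```

In this kind of setup, the values produced by the learned Q-network could then score candidate request-vehicle pairings in the assignment step, though the paper's specific assignment scheme is not reproduced here.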