Abstract Dynamic energy dispatch is an integral part of the operation optimization of integrated energy systems (IESs). Most existing dynamic dispatch schemes depend heavily on explicit forecasts or mathematical models of future uncertainties. Because renewable energy generation and energy demands are stochastic, these approaches are limited by the accuracy of the forecast or model. To address this problem, a novel model-free dynamic dispatch strategy for IESs based on improved deep reinforcement learning (DRL) is proposed. The IES dynamic dispatch problem is formulated as a Markov decision process (MDP) that accounts for the uncertainties of renewable generation, electric load, and heat load. To solve the MDP, an improved deep deterministic policy gradient (DDPG) algorithm using a prioritized experience replay mechanism and L2 regularization is developed, improving both the policy quality and the learning efficiency of the dispatch strategy. The proposed approach requires no forecast information or distributional knowledge and adaptively responds to stochastic fluctuations in supply and demand. Simulation results show that the proposed dispatch strategy converges faster and achieves lower operating costs than the original DDPG-based strategy. In addition, the advantages of the proposed approach in terms of cost-effectiveness and adaptation to a stochastic environment are validated.
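As a rough illustration of the two algorithmic improvements named in the abstract, the sketch below shows how a DDPG critic update could combine importance-sampling weights from prioritized experience replay with L2 regularization (weight decay). It is a minimal assumption-laden sketch, not the paper's implementation: the network sizes, hyperparameters, and function names are all hypothetical.

```python
# Hypothetical sketch: critic update for an "improved DDPG" agent, combining
# prioritized experience replay (via importance-sampling weights) with
# L2 regularization (via optimizer weight decay). All names, dimensions,
# and hyperparameters are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn

state_dim, action_dim, batch = 8, 3, 64   # assumed sizes for an IES dispatch MDP

critic = nn.Sequential(nn.Linear(state_dim + action_dim, 128), nn.ReLU(), nn.Linear(128, 1))
target_critic = nn.Sequential(nn.Linear(state_dim + action_dim, 128), nn.ReLU(), nn.Linear(128, 1))
target_critic.load_state_dict(critic.state_dict())

# weight_decay implements the L2 regularization term on the critic parameters
optimizer = torch.optim.Adam(critic.parameters(), lr=1e-3, weight_decay=1e-4)
gamma = 0.99

def critic_update(s, a, r, s2, a2, is_weights):
    """One gradient step on a prioritized minibatch (s, a, r, s', a')."""
    with torch.no_grad():
        target_q = r + gamma * target_critic(torch.cat([s2, a2], dim=1))
    q = critic(torch.cat([s, a], dim=1))
    td_error = target_q - q
    # Importance-sampling weights correct the bias introduced by
    # non-uniform (prioritized) sampling of transitions.
    loss = (is_weights * td_error.pow(2)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Absolute TD errors would be written back to the replay buffer as priorities.
    return td_error.abs().detach()

# Illustrative call with random tensors standing in for a sampled minibatch.
s, s2 = torch.randn(batch, state_dim), torch.randn(batch, state_dim)
a, a2 = torch.randn(batch, action_dim), torch.randn(batch, action_dim)
r = torch.randn(batch, 1)
is_weights = torch.rand(batch, 1)
new_priorities = critic_update(s, a, r, s2, a2, is_weights)
```

In this sketch the actor update and the priority computation of the replay buffer are omitted; the point is only to show where the importance-sampling weights enter the critic loss and where the L2 penalty is applied.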