Data-Driven Coordinated Charging for Electric Vehicles With Continuous Charging Rates: A Deep Policy Gradient Approach

In this article, we consider a parking lot that manages the charging processes of its parked electric vehicles (EVs). Upon arrival, each EV requests a certain amount of energy, and this request should be fulfilled before the EV’s departure. It is of critical importance to coordinate the EVs’ charging rates to smooth out the load profile of the parking lot, because inappropriate charging rates can lead to sharp spikes and fluctuations in the load profile, imposing negative effects on the power grid. Meanwhile, empirical studies show that many parking lots exhibit statistical patterns in EV dynamics; for example, the bulk of EVs arrives during rush hours. Therefore, in this article, we incorporate such patterns into charging rate coordination. Although the statistical patterns can be summarized from historical data, they are difficult to model analytically. As a result, we adopt a model-free deep reinforcement learning approach. We also take the latest continuous charging rate control technology into consideration; the decision variables are thus continuous, and a policy gradient algorithm is needed to perform reinforcement learning. Technically, we first formulate the problem as a Markov decision process (MDP) with unknown state transition probabilities. In deriving a deep policy gradient algorithm, the challenge lies in the inconsistent, state-dependent action space of the MDP model, which arises from the constraint that each EV’s energy demand must be satisfied before its scheduled departure. To tackle this challenge, we design a customized model for neural network training: we extend the action space so that it is consistent and state independent, and revise the reward function to penalize any neural network output that falls outside the action space of the original MDP model. With this customized model, we then develop a deep policy gradient algorithm based on the proximal policy gradient framework. Numerical results show that our algorithm outperforms the benchmarks.
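The abstract's core trick is to train the policy on an extended, state-independent action box and steer it back toward the original state-dependent feasible set through a reward penalty. The snippet below is a minimal sketch of that idea, not the paper's implementation: the rate limit, control interval, penalty weight, the load-smoothing term, and all variable names are assumptions made for illustration.

```python
import numpy as np

R_MAX = 7.0            # assumed maximum charging rate per EV (kW)
PENALTY_WEIGHT = 10.0  # assumed coefficient for the out-of-feasible-set penalty
DT = 0.25              # assumed length of one decision interval (hours)

def feasible_bounds(remaining_energy, remaining_time):
    """State-dependent rate bounds of the original MDP.

    Each EV must still be able to receive its remaining energy before it
    departs, so the minimum admissible rate grows as the deadline nears.
    """
    slack = np.maximum(remaining_time - DT, 0.0)
    # Energy that must be delivered in this interval so that the rest can
    # still be served at R_MAX afterwards.
    must_deliver_now = np.maximum(remaining_energy - R_MAX * slack, 0.0)
    min_rate = np.minimum(must_deliver_now / DT, R_MAX)
    max_rate = np.full_like(remaining_energy, R_MAX)
    return min_rate, max_rate

def penalized_reward(rates, remaining_energy, remaining_time, prev_load):
    """Reward on the extended, state-independent action space [0, R_MAX]^n.

    The smoothing term discourages spikes in the parking-lot load; the
    penalty term pushes the policy back into the feasible set of the
    original MDP.
    """
    load = rates.sum()
    smoothing = -(load - prev_load) ** 2              # assumed smoothing objective
    lo, hi = feasible_bounds(remaining_energy, remaining_time)
    violation = np.maximum(lo - rates, 0.0) + np.maximum(rates - hi, 0.0)
    return smoothing - PENALTY_WEIGHT * violation.sum(), load

# Toy usage: three EVs, one decision step.
rates = np.array([3.0, 5.0, 0.5])        # policy output on the extended space
energy = np.array([2.0, 1.0, 6.5])       # remaining demands (kWh)
time_left = np.array([2.0, 0.5, 1.0])    # hours until scheduled departure
reward, load = penalized_reward(rates, energy, time_left, prev_load=6.0)
print(f"reward={reward:.2f}, load={load:.1f} kW")
```

Inside a standard PPO-style training loop, a penalized reward of this kind would replace the unconstrained objective, letting the policy network output any rate in the extended box while still learning to respect each EV's deadline.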

Keywords: charging rates; deep policy; policy gradient; policy; model

Journal Title: IEEE Internet of Things Journal
Year Published: 2022
