This article addresses the wind-farm power tracking problem, which requires the farm's total power generation to track time-varying power references and thereby allows the wind farm to participate in ancillary services such as frequency regulation. A novel preview-based robust deep reinforcement learning (PR-DRL) method is proposed to handle such tasks, which are subject to uncertain environmental conditions and strong aerodynamic interactions among wind turbines. To our knowledge, this is the first time a data-driven, model-free solution has been developed for wind-farm power tracking. In particular, reference signals are treated as preview information and embedded in the system as specially designed augmented states. The control problem is then transformed into a zero-sum game to quantify the influence of unknown wind conditions and future reference signals. Built upon $H_\infty$ control theory, the proposed PR-DRL method can successfully approximate the resulting zero-sum game's solution and achieve wind-farm power tracking. Time-series measurements and long short-term memory (LSTM) networks are employed in the DRL structure to handle the non-Markovian property induced by the time-delayed nature of aerodynamic interactions. Tests based on a dynamic wind-farm simulator demonstrate the effectiveness of the proposed PR-DRL wind-farm control strategy.
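As a rough illustration of two ingredients described in the abstract, the sketch below shows how future power references could be appended to the observation as preview-augmented states and fed to an LSTM-based policy that consumes a window of past measurements. This is a minimal, assumed sketch only: the network sizes, variable names, actor-only view, and the use of PyTorch are illustrative choices, and the actual PR-DRL training (the zero-sum game and $H_\infty$-based approximation) is not reproduced here.

```python
# Minimal sketch (not the authors' implementation) of:
#  (i) preview information: future reference samples appended to the
#      observation as augmented states, and
#  (ii) an LSTM policy over a window of past measurements to cope with
#      the non-Markovian, time-delayed wake interactions.
# All dimensions and names are hypothetical.
import torch
import torch.nn as nn


class PreviewLSTMPolicy(nn.Module):
    """LSTM actor mapping a history of (measurement, preview-reference)
    augmented states to per-turbine power set-point commands."""

    def __init__(self, n_meas, n_preview, n_turbines, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_meas + n_preview,
                            hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_turbines)

    def forward(self, meas_seq, preview_seq):
        # meas_seq:    (batch, T, n_meas)    time-series farm measurements
        # preview_seq: (batch, T, n_preview) preview window of the reference
        x = torch.cat([meas_seq, preview_seq], dim=-1)  # augmented state
        out, _ = self.lstm(x)
        # Commands in [0, 1], interpreted as normalized power set-points.
        return torch.sigmoid(self.head(out[:, -1]))


# Example forward pass with random data (shapes only).
policy = PreviewLSTMPolicy(n_meas=12, n_preview=5, n_turbines=3)
meas = torch.randn(1, 20, 12)      # last 20 measurement samples
preview = torch.randn(1, 20, 5)    # previewed future reference samples
print(policy(meas, preview))       # per-turbine set-point fractions
```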