Abstract This work develops a framework for solving highly non-linear satellite formation control problems using model-free policy-optimisation deep reinforcement learning (DRL) methods. It considers, for what is believed to be the first time, DRL methods such as advantage actor-critic (A2C) and proximal policy optimisation (PPO) for the example satellite formation problem of propellantless planar phasing of multiple satellites. Three-degree-of-freedom simulations, including a novel surrogate propagation model, are used to train the DRL agents. During training, the agents actuate their motion through cross-sectional area changes, which alter the environmental accelerations acting on them. The DRL framework designed in this work successfully coordinated three spacecraft to achieve a propellantless planar phasing manoeuvre. The resulting framework can be applied to complex satellite formation flying problems, such as planar phasing of multiple satellites, and in doing so provides key insights into achieving optimal and robust formation control using reinforcement learning.
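To make the control concept concrete, the sketch below illustrates the core idea the abstract describes: an agent whose only actuator is its cross-sectional area, which modulates atmospheric drag and thereby drifts the satellite's along-track phase. This is a minimal toy sketch, not the paper's method: the single-deputy setup, the constant-density drag model, the state/action definitions, the reward, and the scaling constants are all assumptions introduced here for illustration, and the gymnasium and stable-baselines3 APIs stand in for whatever simulation and PPO implementation the authors used.

```python
# Hypothetical sketch of differential-drag planar phasing with PPO.
# Everything below (dynamics, reward, constants) is an illustrative
# assumption; the paper's surrogate propagation model is not reproduced.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class PlanarPhasingEnv(gym.Env):
    """Toy environment: a deputy satellite commands its cross-sectional
    area to drift toward a target along-track phase offset from a chief."""

    def __init__(self, target_phase_rad=np.pi / 3):
        self.target = target_phase_rad
        # Observation: [phase error (rad), negative phase rate (rad/s)]
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(2,), dtype=np.float32)
        # Action: normalised area command in [-1, 1], mapped to [a_min, a_max] m^2
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        self.dt = 60.0                      # control interval, s
        self.rho = 5e-13                    # kg/m^3, fixed density (assumption)
        self.cd, self.mass, self.v = 2.2, 4.0, 7600.0
        self.a_min, self.a_max = 0.01, 0.1  # area limits, m^2 (assumption)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.phase, self.phase_rate, self.steps = 0.0, 0.0, 0
        return self._obs(), {}

    def _obs(self):
        return np.array([self.target - self.phase, -self.phase_rate], dtype=np.float32)

    def step(self, action):
        # Map the action to an area; drag relative to the chief's mean area
        # changes the along-track drift rate (crude point-mass drag model).
        area = self.a_min + 0.5 * (action[0] + 1.0) * (self.a_max - self.a_min)
        a_ref = 0.5 * (self.a_min + self.a_max)
        drag_diff = 0.5 * self.rho * self.cd * (area - a_ref) / self.mass * self.v ** 2
        self.phase_rate += 1e3 * drag_diff * self.dt   # scaled toy dynamics
        self.phase += self.phase_rate * self.dt
        self.steps += 1
        reward = -abs(self.target - self.phase)        # simple tracking reward
        truncated = self.steps >= 2000
        return self._obs(), reward, False, truncated, {}


if __name__ == "__main__":
    from stable_baselines3 import PPO  # PPO, as named in the abstract

    model = PPO("MlpPolicy", PlanarPhasingEnv(), verbose=0)
    model.learn(total_timesteps=50_000)
```

Extending this sketch toward the paper's setting would mean replacing the toy drag dynamics with a surrogate propagation model, expanding the state and action spaces to three spacecraft, and shaping the reward around the multi-satellite phasing objective.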
               