The use of machine learning to build subgrid parametrizations for climate models is receiving growing attention. State-of-the-art strategies address the problem as a supervised learning task and optimize algorithms that predict subgrid fluxes based on information from coarse-resolution models. In practice, training data are generated from higher-resolution numerical simulations transformed to mimic coarse-resolution simulations. By construction, these strategies optimize subgrid parametrizations to meet so-called a priori criteria. But the actual purpose of a subgrid parametrization is to obtain good performance in terms of a posteriori metrics, which require computing entire model trajectories. In this paper, we focus on the representation of energy backscatter in two-dimensional quasi-geostrophic turbulence and compare parametrizations obtained with different learning strategies at fixed computational complexity. We show that strategies based on a priori criteria yield parametrizations that tend to be unstable in direct simulations, and we describe how subgrid parametrizations can alternatively be trained end-to-end in order to meet a posteriori criteria. We illustrate that end-to-end learning strategies yield parametrizations that outperform known empirical and data-driven schemes in terms of accuracy, stability, and ability to generalize to different flow configurations. These results support the relevance of differentiable programming paradigms for climate models in the future.
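The contrast between the two learning strategies can be sketched in code. The following is a minimal toy illustration in JAX, not the paper's actual model: an a priori loss fits the closure term to an instantaneous diagnosed flux, while an a posteriori loss rolls the coarse model forward in time and backpropagates through every integration step. All names (`coarse_step`, `theta`, the scalar closure itself) are hypothetical simplifications.

```python
import jax
import jax.numpy as jnp

# Hypothetical 1D coarse model: explicit Euler step with resolved damping
# plus a learnable scalar closure `theta` standing in for the subgrid term.
def coarse_step(u, theta, dt=0.01):
    return u + dt * (-0.5 * u + theta * u)

def rollout(theta, u0, n):
    # Differentiable time integration: gradients flow through all n steps.
    def body(u, _):
        u_next = coarse_step(u, theta)
        return u_next, u_next
    _, traj = jax.lax.scan(body, u0, None, length=n)
    return traj

# A priori loss: match the instantaneous subgrid term to a diagnosed target
# flux, with no time integration involved.
def a_priori_loss(theta, u, target_flux):
    return jnp.mean((theta * u - target_flux) ** 2)

# A posteriori (end-to-end) loss: compare entire simulated trajectories to a
# reference trajectory, differentiating through the solver itself.
def a_posteriori_loss(theta, u0, ref_traj):
    traj = rollout(theta, u0, ref_traj.shape[0])
    return jnp.mean((traj - ref_traj) ** 2)

u0 = jnp.ones(8)
ref_traj = rollout(0.2, u0, 20)  # synthetic "truth" generated with theta = 0.2
g = jax.grad(a_posteriori_loss)(0.0, u0, ref_traj)  # gradient through the rollout
```

The key design point is that the a posteriori gradient `g` accounts for how the closure shapes the whole trajectory, which is why this style of training requires a differentiable model, and why the paper connects it to differentiable programming for climate models.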