Bayesian learning theory and evolutionary theory both formalize adaptive competition dynamics in possibly high‐dimensional, varying, and noisy environments. What do they have in common and how do they differ? In this paper, we discuss structural and dynamical analogies and their limits, both at a computational and an algorithmic‐mechanical level. We point out mathematical equivalences between their basic dynamical equations, generalizing the isomorphism between Bayesian update and replicator dynamics. We discuss how these mechanisms provide analogous answers to the challenge of adapting to stochastically changing environments at multiple timescales. We elucidate an algorithmic equivalence between a sampling approximation, particle filters, and the Wright‐Fisher model of population genetics. These equivalences suggest that the frequency distribution of types in replicator populations optimally encodes regularities of a stochastic environment to predict future environments, without invoking the known mechanisms of multilevel selection and evolvability. A unified view of the theories of learning and evolution comes in sight.
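The isomorphism between Bayesian update and replicator dynamics mentioned in the abstract can be illustrated with a minimal sketch: a discrete-time replicator step with fitnesses f_i is formally the same map as Bayes' rule with likelihoods f_i. The specific frequencies and fitness values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Posterior ∝ prior × likelihood, renormalized."""
    post = prior * likelihood
    return post / post.sum()

def replicator_step(freqs, fitness):
    """Discrete-time replicator: x_i' = x_i f_i / (mean fitness)."""
    new = freqs * fitness
    return new / new.sum()

# Illustrative values (hypothetical, not from the paper):
# type frequencies read as a prior over hypotheses,
# fitnesses read as likelihoods of the observed environment.
prior = np.array([0.5, 0.3, 0.2])
fitness = np.array([1.2, 0.8, 1.0])

# The two updates coincide term by term.
assert np.allclose(bayes_update(prior, fitness),
                   replicator_step(prior, fitness))
```

Under this reading, selection re-weights type frequencies exactly as Bayesian conditioning re-weights hypothesis probabilities; the sampling-level analogue in the paper pairs the Wright-Fisher model's multinomial reproduction with the resampling step of a particle filter.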