We study learning dynamics in a prototypical representative-agent forward-looking model in which agents’ beliefs are updated using linear learning algorithms. We show that learning in this model can generate long memory endogenously, without any persistence in the exogenous shocks, depending on the weights agents place on past observations when they update their beliefs, and on the magnitude of the feedback from expectations to the endogenous variable. This is distinctly different from the case of rational expectations, where the memory of the endogenous variable is determined exogenously.
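The mechanism described in the abstract can be illustrated with a minimal simulation. The model below is a hypothetical stand-in, not the paper's exact specification: the endogenous variable is assumed to follow `y_t = delta * b_{t-1} + eps_t`, where `delta` is the strength of the feedback from expectations, `eps_t` is an i.i.d. shock with no persistence, and the belief `b_t` is updated by a constant-gain linear learning rule. All parameter names (`delta`, `gamma`) and the specific functional form are assumptions for illustration only.

```python
import numpy as np

def simulate_learning(T=10000, delta=0.9, gamma=0.05, seed=0):
    """Simulate a stylized forward-looking model under linear learning.

    Assumed (illustrative) law of motion:  y_t = delta * b_{t-1} + eps_t,
    with constant-gain belief updating:     b_t = b_{t-1} + gamma * (y_t - b_{t-1}).
    The shocks eps_t are i.i.d., so any persistence in y comes entirely
    from the learning dynamics, not from the exogenous shocks.
    """
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(T)
    y = np.empty(T)
    b = 0.0
    for t in range(T):
        y[t] = delta * b + eps[t]       # feedback from beliefs to outcome
        b = b + gamma * (y[t] - b)      # linear (constant-gain) belief update
    return y

def acf(x, lag):
    """Sample autocorrelation of x at a given lag."""
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

y = simulate_learning()
print([round(acf(y, k), 3) for k in (1, 10, 50)])
```

With positive feedback (`delta = 0.9`) the autocorrelations decay very slowly, since the belief process inherits an autoregressive root close to one; setting `delta = 0` removes the feedback and `y` reverts to white noise. This contrasts the learning case with the rational-expectations benchmark, in which the memory of `y` is pinned down by the shock process alone.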