Maintaining a stable estimate of head direction requires both self-motion (idiothetic) information and environmental (allothetic) anchoring. In unfamiliar or dark environments, idiothetic drive can maintain a rough estimate of heading but is subject to inaccuracy; visual information is required to stabilise the head direction estimate. When learning to associate visual scenes with head angle, animals do not have access to the ‘ground truth’ of their head direction and must rely on imprecise, egocentrically derived head direction estimates. We use both discriminative and generative methods of visual processing to learn these associations without extracting explicit landmarks from a natural visual scene, finding that all are sufficiently capable of providing a corrective signal. Further, we present a spiking continuous attractor model of head direction (SNN) which, when driven by idiothetic input alone, is subject to drift. We show that head direction predictions made by the chosen model-free visual learning algorithms can correct this drift, even when trained on a small set of estimated head angles self-generated by the SNN. We validate the model against experimental work by reproducing cue rotation experiments that demonstrate visual control of the head direction signal.
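
As a concrete illustration of the drift-and-correction loop the abstract describes, the sketch below implements a toy rate-based ring of head-direction cells in Python. It is a minimal stand-in for the paper's spiking continuous attractor network, not the authors' implementation: the bump width, gain error, and correction schedule are illustrative assumptions, and the "visual prediction" is simply the true heading plus noise rather than the learned discriminative/generative estimate.

```python
import numpy as np

# Toy rate-based ring-attractor sketch (assumed stand-in for the paper's
# spiking model). Idiothetic input is integrated with a gain error and
# noise, so the decoded heading drifts; a periodic noisy "visual
# prediction" of head angle pulls the activity bump back toward the
# true heading, illustrating allothetic correction.

N = 100                                                 # head-direction cells
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)    # preferred directions

def bump(center, width=0.5):
    """Gaussian activity profile on the ring, centred on `center`."""
    d = np.angle(np.exp(1j * (theta - center)))         # wrapped distance
    return np.exp(-d**2 / (2 * width**2))

def decode(rates):
    """Population-vector decode of the bump position."""
    return np.angle(np.sum(rates * np.exp(1j * theta)))

rng = np.random.default_rng(0)
dt, T = 0.01, 2000
true_heading = 0.0
rates = bump(true_heading)

for t in range(T):
    omega = 0.5 * np.sin(0.01 * t)                      # true angular velocity
    true_heading = (true_heading + omega * dt) % (2 * np.pi)

    # Idiothetic path integration with gain error and noise -> drift.
    noisy_omega = omega * 1.05 + rng.normal(0, 0.05)
    estimate = (decode(rates) + noisy_omega * dt) % (2 * np.pi)
    rates = bump(estimate)          # caricature of the bump shifting

    # Periodic allothetic correction: add input at the visually
    # predicted heading (here: true heading plus noise).
    if t % 50 == 0:
        visual_pred = true_heading + rng.normal(0, 0.05)
        rates += 0.5 * bump(visual_pred)                # corrective input
        rates /= rates.max()                            # crude normalisation

err = np.angle(np.exp(1j * (decode(rates) - true_heading)))
print(f"residual heading error with correction: {err:.3f} rad")
```

With the corrective block in place, the decoded heading stays close to the true heading; removing the `if t % 50 == 0` block lets the gain error accumulate into visible drift, mirroring the role the abstract assigns to the model-free visual predictions.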