Significance The ability to link a first-person experience to a global map is an important cognitive function. To investigate the computations underlying this task, we used variational autoencoders to reconstruct top-down images from a robot’s camera view, and vice versa. We observed that place-specific coding is more prevalent when mapping a top-down view to a first-person view, while head-direction selectivity is more prevalent in the opposite direction. In both cases, the system recovers from perturbations through population coding and instantaneous remapping. This modeling brings a fundamentally different approach to understanding transformations between perspectives and suggests testable predictions for the nervous system.
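The abstract does not specify the network architecture, so the following is only a minimal sketch of the general variational-autoencoder scheme it names: an encoder compresses one view into a latent code, a decoder reconstructs the paired view, and training balances reconstruction error against a KL term. All dimensions, weights, and function names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: flattened 32x32 views, an 8-dimensional latent space.
IN_DIM, LATENT_DIM = 32 * 32, 8

# Randomly initialized linear maps stand in for trained encoder/decoder weights.
W_enc = rng.normal(0.0, 0.01, (IN_DIM, 2 * LATENT_DIM))
W_dec = rng.normal(0.0, 0.01, (LATENT_DIM, IN_DIM))

def encode(x):
    """Map a first-person (camera) view to latent mean and log-variance."""
    h = x @ W_enc
    return h[:, :LATENT_DIM], h[:, LATENT_DIM:]

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map a latent sample to a reconstructed top-down view."""
    return z @ W_dec

def vae_loss(x_target, x_recon, mu, logvar):
    """Mean squared reconstruction error plus KL divergence to a unit Gaussian."""
    recon = np.mean((x_target - x_recon) ** 2)
    kl = -0.5 * np.mean(1.0 + logvar - mu**2 - np.exp(logvar))
    return recon + kl

# One forward pass: camera view in, top-down reconstruction out.
camera_view = rng.random((4, IN_DIM))   # batch of 4 flattened camera frames
topdown_view = rng.random((4, IN_DIM))  # paired top-down targets
mu, logvar = encode(camera_view)
z = reparameterize(mu, logvar)
recon = decode(z)
loss = vae_loss(topdown_view, recon, mu, logvar)
```

The reverse direction (top-down to first-person) would use a second such model with the roles of input and target swapped; the place-specific versus head-direction selectivity reported in the abstract would then be read out from the latent or hidden units of each model.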