Recently, there has been a focus on drawing on progress in representation learning to obtain more identifiable and interpretable latent representations of spike trains, which helps analyze neural population activity and understand neural mechanisms. Most existing deep generative models adopt carefully designed constraints to capture meaningful latent representations. For neural data involving navigation in cognitive space, based on insights from studies of cognitive maps, we argue that good representations should reflect this directional nature. Because of the manifold mismatch, models with Euclidean latent spaces learn a distorted geometric structure that is difficult to interpret. In the present work, we explore capturing this directional nature in a simpler yet more efficient way by introducing hyperspherical neural latent variable models (SNLVM). SNLVM is an improved deep latent variable model that models neural activity and behavioral variables simultaneously in a hyperspherical latent space, bridging cognitive maps and latent variable models. We conduct experiments on modeling a static unidirectional task. The results show that SNLVM achieves competitive performance, while its hyperspherical prior naturally yields more informative and significantly better latent structures that can be interpreted as spatial cognitive maps.
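To make the idea of a hyperspherical latent space concrete, below is a minimal PyTorch sketch of an encoder that maps binned spike counts onto the unit hypersphere. This is not the paper's implementation: the class name SphericalEncoder, the layer sizes, and the use of a projected-Gaussian sample (a reparameterized Gaussian normalized onto the sphere) in place of an explicit von Mises-Fisher sampler are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SphericalEncoder(nn.Module):
    """Hypothetical encoder: binned spike counts -> latent point on the unit hypersphere.

    Uses a projected-Gaussian approximation (sample a Gaussian, then normalize)
    purely for illustration; a hyperspherical model may instead use a
    von Mises-Fisher posterior.
    """

    def __init__(self, n_neurons: int, latent_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_neurons, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, latent_dim)         # unnormalized mean direction
        self.log_sigma = nn.Linear(hidden, latent_dim)  # per-dimension noise scale

    def forward(self, spikes: torch.Tensor) -> torch.Tensor:
        h = self.net(spikes)
        mu = self.mu(h)
        sigma = self.log_sigma(h).exp()
        eps = torch.randn_like(mu)
        z = mu + sigma * eps             # reparameterized Gaussian sample
        return F.normalize(z, dim=-1)    # project onto the unit hypersphere

# Usage sketch: 50 time bins of activity from 100 neurons -> 3-d spherical latents
enc = SphericalEncoder(n_neurons=100, latent_dim=3)
z = enc(torch.rand(50, 100))
print(z.shape, z.norm(dim=-1))  # all norms equal 1: points on S^2
```

Constraining latents to the sphere in this way is what lets directional (angular) structure in the data, such as heading during navigation, appear directly in the latent geometry rather than being distorted by a Euclidean embedding.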