Vision and touch both support spatial information processing. These sensory systems also exhibit highly specific interactions in spatial perception, which may reflect multisensory representations that are learned through visuo-tactile (VT) experiences. Recently, Wani and colleagues reported that task-irrelevant visual cues bias tactile perception in a brightness-dependent manner on a task requiring participants to detect unimanual and bimanual tactile cues. Importantly, tactile performance remained spatially biased after VT exposure, even when no visual cues were presented. These effects on bimanual touch conceivably reflect cross-modal learning, but the neural substrates changed by VT experience are unclear. We previously described a neural network capable of simulating VT spatial interactions. Here, we exploited this model to test different hypotheses regarding the network-level changes that may underlie the VT learning effects. Simulation results indicated that the VT learning effects are inconsistent with plasticity restricted to unisensory visual and tactile hand representations. The learning effects were likewise inconsistent with changes restricted to the strength of inter-hemispheric inhibitory interactions. Instead, we found that both the hand representations and the inter-hemispheric inhibitory interactions need to be plastic to fully recapitulate the VT learning effects. Our results imply that cross-modal learning of bimanual spatial perception involves multiple changes distributed over a VT-processing cortical network.
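The three hypotheses contrasted in the abstract (plasticity confined to the unisensory hand representations, plasticity confined to inter-hemispheric inhibition, or plasticity in both) can be illustrated with a minimal sketch. This is not the authors' published network; the function, parameter names, and numeric values below are hypothetical placeholders chosen only to show how the two loci of plasticity enter the computation.

```python
# Minimal sketch of two hemispheric hand representations coupled by
# mutual inhibition. All gains and the inhibition strength are
# illustrative assumptions, not values from the paper or its model.

def bimanual_response(left_input, right_input,
                      gain_left=1.0, gain_right=1.0, inhibition=0.5):
    """One-step rectified response of two hand representations that
    inhibit each other. "Learning" can be modeled as a change in the
    unisensory gains, in the inhibition strength, or in both."""
    left = max(gain_left * left_input - inhibition * right_input, 0.0)
    right = max(gain_right * right_input - inhibition * left_input, 0.0)
    return left, right

# Hypothetical learning scenarios corresponding to the abstract's logic:
# 1) plasticity only in the unisensory hand representations (gains),
# 2) plasticity only in the inter-hemispheric inhibition,
# 3) plasticity in both (the configuration the simulations favored).
print(bimanual_response(1.0, 1.0, gain_left=1.2, gain_right=0.9))
print(bimanual_response(1.0, 1.0, inhibition=0.3))
print(bimanual_response(1.0, 1.0, gain_left=1.2, gain_right=0.9, inhibition=0.3))
```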