Recent functional magnetic resonance imaging (fMRI) studies have made significant progress in reconstructing perceived visual content, advancing our understanding of visual mechanisms. However, reconstructing dynamic natural vision remains a challenge because of the limited temporal resolution of fMRI. Here, we developed a novel fMRI-conditional video generative adversarial network (f-CVGAN) to reconstruct rapid video stimuli from evoked fMRI responses. In this model, a generator produces spatiotemporal reconstructions, and two separate discriminators (a spatial and a temporal discriminator) assess them. We trained and tested the f-CVGAN on two publicly available video-fMRI datasets, and the model produced pixel-level reconstructions of 8 perceived video frames from each fMRI volume. Experimental results showed that the reconstructed videos were specifically related to the fMRI input and captured important spatial and temporal information from the original stimuli. Moreover, we visualized the cortical importance map and found that the visual cortex is extensively involved in the reconstruction, with the low-level visual areas (V1/V2/V3/V4) contributing the most. Our work suggests that slow blood-oxygen-level-dependent (BOLD) signals encode neural representations of fast perceptual processes, and that these representations can be decoded in practice.
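The dual-discriminator design described above can be sketched at the level of tensor shapes. The following is a minimal, hypothetical illustration of the input/output contract only: the voxel count, frame resolution, and all function bodies are illustrative assumptions, not the authors' actual architecture (a real f-CVGAN would use learned convolutional networks trained adversarially).

```python
# Shape-level sketch of the f-CVGAN pipeline described in the abstract.
# All sizes and function bodies are illustrative assumptions, not the
# authors' actual model.
import numpy as np

N_VOXELS = 4096          # fMRI volume flattened to a voxel vector (assumed size)
N_FRAMES = 8             # 8 video frames reconstructed per fMRI volume (from the abstract)
FRAME_H = FRAME_W = 64   # pixel resolution of each reconstructed frame (assumed)

rng = np.random.default_rng(0)

def generator(fmri_volume):
    """Map one fMRI volume to a (frames, H, W) spatiotemporal reconstruction.
    A real generator would use learned deconvolutions; a fixed random
    projection here only demonstrates the input/output contract."""
    w = rng.standard_normal((N_VOXELS, N_FRAMES * FRAME_H * FRAME_W)) * 0.01
    video = np.tanh(fmri_volume @ w)  # squashed to [-1, 1], as in typical GAN outputs
    return video.reshape(N_FRAMES, FRAME_H, FRAME_W)

def spatial_discriminator(frame):
    """Score a single frame for spatial realism (scalar in (0, 1))."""
    return 1.0 / (1.0 + np.exp(-frame.mean()))

def temporal_discriminator(video):
    """Score the whole frame sequence for temporal coherence, here by
    penalizing large frame-to-frame differences (scalar in (0, 1))."""
    motion = np.abs(np.diff(video, axis=0)).mean()
    return 1.0 / (1.0 + np.exp(motion))

fmri = rng.standard_normal(N_VOXELS)   # one simulated fMRI response
video = generator(fmri)
print(video.shape)                     # → (8, 64, 64)
s_score = spatial_discriminator(video[0])
t_score = temporal_discriminator(video)
```

In adversarial training, the generator would be updated to raise both discriminator scores simultaneously, so that each reconstructed frame looks plausible (spatial discriminator) and consecutive frames vary smoothly (temporal discriminator).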