Previous studies have shown that the brain generates scene-based expectations that affect facial expression recognition. However, although facial expressions are known to interact with scene perception, the mechanism underlying this interaction remains poorly understood. Here, we used frequency labeling and decoding techniques to reveal the effects of scene-based expectation on the amplitude and representational strength of neural activity. We also reduced the reliability of the sensory input relative to the expectation by blurring facial expressions, to further investigate how this relative reliability affects the pattern of neural activation and representation. Participants viewed emotional changes in unblurred or blurred facial expressions, which flickered at a rate of 6 Hz within a scene. In both the unblurred and blurred conditions, facial expressions congruent with the emotional significance of the scene elicited a larger steady-state visual evoked potential (SSVEP) amplitude than did incongruent facial expressions. We also found that expected facial expression representations were stronger than unexpected representations in the unblurred condition, whereas in the blurred condition, unexpected representations were stronger than expected ones. Taken together, these results suggest that facial expression processing in the visual cortex is modulated by top-down signals, and that the relative reliability of expectation and sensory input moderates the influence of a scene on facial expression representation. Furthermore, our study showed that neural activation amplitudes did not correspond to representational strength.
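The abstract's frequency-labeling approach relies on measuring the SSVEP amplitude at the known flicker rate (6 Hz). A minimal sketch of that spectral measurement, on a synthetic one-channel signal, is shown below; the sampling rate, signal, and the `ssvep_amplitude` helper are illustrative assumptions, not the study's actual analysis pipeline.

```python
import numpy as np

def ssvep_amplitude(signal, fs, target_hz):
    """Amplitude of the spectral component nearest target_hz (single-sided FFT)."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / n  # single-sided amplitude scaling
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - target_hz))       # nearest frequency bin
    return freqs[idx], spectrum[idx]

# Simulate 10 s of one channel at 250 Hz: a 6 Hz steady-state response plus noise.
fs, dur, tag_hz = 250, 10.0, 6.0
rng = np.random.default_rng(0)
t = np.arange(0, dur, 1.0 / fs)
eeg = 2.0 * np.sin(2 * np.pi * tag_hz * t) + rng.normal(0.0, 1.0, t.size)

freq, amp = ssvep_amplitude(eeg, fs, tag_hz)  # amp ≈ the 2.0 µV driving amplitude
```

With a 10 s window the frequency resolution is 0.1 Hz, so 6 Hz falls exactly on an FFT bin; in practice, epoch lengths are typically chosen as integer multiples of the flicker period for this reason.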