Existing studies demonstrate that comprehenders can predict semantic information during language comprehension. Most evidence comes from highly constraining contexts, in which a specific word is likely to be predicted. A less investigated question is whether prediction can occur when the prior context is less constraining for specific words. Here, we address this issue by examining the prediction of animacy features in a low-constraining context, using electroencephalography (EEG) in combination with representational similarity analysis (RSA). In Chinese, a classifier follows a numeral and precedes a noun, and classifiers constrain the animacy features of upcoming nouns. In the task, native Mandarin Chinese speakers were presented with either animate-constraining or inanimate-constraining classifiers followed by congruent or incongruent nouns. EEG amplitude analysis revealed an N400 effect in the incongruent conditions, reflecting the difficulty of semantic integration when an incompatible noun is encountered. Critically, we quantified the similarity between patterns of neural activity following the classifiers. RSA results revealed that the similarity between patterns of neural activity following animate-constraining classifiers was greater than that following inanimate-constraining classifiers, before the presentation of the nouns, reflecting pre-activation of the animacy features of the upcoming nouns. These findings provide evidence for the prediction of coarse-grained semantic features of upcoming words.
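The within-condition similarity comparison described in the abstract can be illustrated with a minimal sketch. This is not the authors' analysis pipeline: the data shapes, the pre-noun time window, and Pearson correlation as the similarity metric are illustrative assumptions, and synthetic data stand in for real single-trial EEG patterns.

```python
# Minimal sketch of the RSA logic: compare average pairwise pattern similarity
# across trials within each classifier condition, in a pre-noun window.
# All shapes and the similarity metric are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-trial EEG patterns (n_trials, n_channels) per condition.
n_trials, n_channels = 60, 64
animate_patterns = rng.standard_normal((n_trials, n_channels))
inanimate_patterns = rng.standard_normal((n_trials, n_channels))


def mean_pairwise_similarity(patterns: np.ndarray) -> float:
    """Average Pearson correlation over all distinct trial pairs."""
    corr = np.corrcoef(patterns)                   # trial-by-trial correlation matrix
    upper = corr[np.triu_indices_from(corr, k=1)]  # keep each pair once, drop diagonal
    return float(upper.mean())


animate_sim = mean_pairwise_similarity(animate_patterns)
inanimate_sim = mean_pairwise_similarity(inanimate_patterns)

# The key comparison reported in the abstract: greater within-condition similarity
# after animate-constraining than after inanimate-constraining classifiers is taken
# as evidence of pre-activated animacy features of the upcoming nouns.
print(f"animate-constraining similarity:   {animate_sim:.3f}")
print(f"inanimate-constraining similarity: {inanimate_sim:.3f}")
```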