The human auditory system excels at detecting patterns needed for processing speech and music. According to predictive coding, the brain predicts incoming sounds, compares predictions to sensory input and generates a prediction error whenever a mismatch between the prediction and sensory input occurs. Predictive coding can be indexed in electroencephalography (EEG) with the mismatch negativity (MMN) and P3a, two components of event-related potentials (ERP) that are elicited by infrequent deviant sounds (e.g., differing in pitch, duration and loudness) in a stream of frequent sounds. If these components reflect prediction error, they should also be elicited by omitting an expected sound, but few studies have examined this. We compared ERPs elicited by infrequent randomly occurring omissions (unexpected silences) in tone sequences presented at two tones per second to ERPs elicited by frequent, regularly occurring omissions (expected silences) within a sequence of tones presented at one tone per second. We found that unexpected silences elicited significant MMN and P3a, although the magnitude of these components was quite small and variable. These results provide evidence for hierarchical predictive coding, indicating that the brain predicts silences and sounds.
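The two stimulus streams described above can be sketched as event schedules. This is a minimal illustration only, not the authors' stimulus code: the omission probability (`p_omit`), sequence length, and regular-omission spacing (`omit_every`) are hypothetical parameters chosen for the example, since the abstract specifies only the presentation rates (two tones per second for the random-omission stream, one tone per second for the regular-omission stream).

```python
import random

def oddball_stream(n_slots, rate_hz, p_omit, seed=0):
    """Stream of tone slots at a fixed rate where each slot is randomly
    replaced by an omission ('O' = unexpected silence) with probability
    p_omit; all other slots carry a tone ('T')."""
    rng = random.Random(seed)
    isi = 1.0 / rate_hz  # inter-onset interval in seconds
    onsets = [i * isi for i in range(n_slots)]
    labels = ['O' if rng.random() < p_omit else 'T' for _ in range(n_slots)]
    return onsets, labels

def regular_stream(n_slots, rate_hz, omit_every):
    """Stream where every omit_every-th slot is an omission, so the
    silence is fully predictable (expected silence)."""
    isi = 1.0 / rate_hz
    onsets = [i * isi for i in range(n_slots)]
    labels = ['O' if (i + 1) % omit_every == 0 else 'T'
              for i in range(n_slots)]
    return onsets, labels

# Unexpected silences: 2 tones/s, rare random omissions (p_omit assumed).
rand_onsets, rand_labels = oddball_stream(n_slots=200, rate_hz=2.0, p_omit=0.1)
# Expected silences: 1 tone/s, omission at a fixed position in each cycle
# (spacing assumed).
reg_onsets, reg_labels = regular_stream(n_slots=200, rate_hz=1.0, omit_every=5)
```

ERPs would then be epoched around the onset time of each 'O' slot in both streams and averaged per condition, so the MMN/P3a comparison is between time-locked responses to unexpected versus expected silences.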