When an otherwise inconspicuous stimulus comes, through learning, to predict a reward, it will automatically capture visual attention. This learned attentional bias is not specific to the precise object previously associated with reward but can be observed for different stimuli that share a defining feature with the reward cue. Under certain circumstances, value-driven attentional biases can even transfer to new contexts in which the reward cues were not previously experienced, and can also be evident for different exemplars of a stimulus category, suggesting some degree of tolerance in the scope of the underlying bias. Whether a match to a reward-predictive feature is necessary to support value-driven attention, or whether similar-looking features also receive some degree of elevated priority following associative reward learning, remains an open question. Here, I examine the impact of learned associations between reward and red- and green-colored stimuli on the processing of other colors. The findings show that even though other colors experienced during training were non-predictive of reward, the speed with which targets bearing these colors were identified in a subsequent test phase varied with their similarity to the high-value color. Thus, value-driven attentional biases for stimulus features are imprecise, as would be predicted by a sensory gain model of value-driven attention.