Reward contingencies are dynamic: outcomes that were valued at one point may subsequently lose value. Action selection in the face of dynamic reward associations requires several cognitive processes: registering a change in value of the primary reinforcer, adjusting the value of secondary reinforcers to reflect the new value of the primary reinforcer, and guiding action selection toward optimal choices. Flexible responding has been evaluated extensively using the reinforcer devaluation task. Performance on this task relies upon the amygdala, Areas 11 and 13 of the orbitofrontal cortex (OFC), and the mediodorsal thalamus (MD). Differential contributions of the amygdala and of OFC Areas 11 and 13 to specific sub-processes have been established, but the role of MD in these sub-processes is unknown. Pharmacological inactivation of the macaque MD during specific phases of this task revealed that MD is required for both reward valuation and action selection. This profile is unique, differing from that of the amygdala and of the OFC subregions.