Reinforcement Learning for Production-Based Cognitive Models

Production-based cognitive models, such as Adaptive Control of Thought-Rational (ACT-R) or Soar agents, have been a popular tool in cognitive science to model sequential decision processes. While the models have been useful in articulating assumptions and predictions of various theories, they unfortunately require a significant amount of hand coding, both with respect to what building blocks cognitive processes should consist of and with respect to how these building blocks are selected and ordered in a sequential decision process. Hand coding of large, realistic models poses a challenge for modelers, and also makes it unclear whether the models can be learned and are thus cognitively plausible. The learnability issue is probably most starkly present in cognitive models of linguistic skills, since linguistic skills involve richly structured representations and highly complex rules. We investigate how reinforcement learning (RL) methods can be used to solve the production selection and production ordering problem in ACT-R. We focus on four algorithms from the Q-learning family, tabular Q-learning and three versions of deep Q networks (DQNs), as well as the ACT-R utility learning algorithm, which provides a baseline for the Q algorithms. We compare the performance of these five algorithms in a range of lexical decision (LD) tasks framed as sequential decision problems. We observe that, unlike the ACT-R baseline, the Q agents learn even the more complex LD tasks fairly well. However, tabular Q-learning and DQNs show a trade-off between speed of learning, applicability to more complex tasks, and how noisy the learned rules are. This indicates that the ACT-R subsymbolic system for procedural memory could be improved by incorporating more insights from RL approaches, particularly the function-approximation-based ones, which learn and generalize effectively in complex, more realistic tasks.
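To make the production selection framing concrete, the sketch below shows tabular Q-learning (epsilon-greedy selection plus the standard one-step Q update) applied to a toy lexical decision trial posed as a sequential decision problem. The state names, production names, reward values, and learning parameters are illustrative assumptions for this sketch, not the task, production set, or hyperparameters used in the paper.

```python
import random

# Toy lexical decision (LD) trial as a sequential decision problem.
# State and production names are hypothetical, for illustration only.
ACTIONS = ["attempt_retrieval", "respond_word", "respond_nonword"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {}  # (state, action) -> estimated value, defaulting to 0.0

def q(state, action):
    return Q.get((state, action), 0.0)

def step(state, action, is_word):
    """Toy environment: retrieval reveals whether the stimulus is a word;
    a correct response earns +1, an incorrect one -1, redundant retrievals -0.1."""
    if action == "attempt_retrieval":
        if state == "stimulus":
            return ("retrieved" if is_word else "retrieval_failed"), 0.0, False
        return state, -0.1, False  # retrieving again only costs time
    correct = (action == "respond_word") == is_word
    return None, (1.0 if correct else -1.0), True

for episode in range(10000):
    is_word = random.random() < 0.5
    state, done, steps = "stimulus", False, 0
    while not done and steps < 10:
        # epsilon-greedy selection among competing productions
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q(state, a))
        next_state, reward, done = step(state, action, is_word)
        # one-step Q-learning update toward reward + discounted best next value
        target = reward if done else reward + GAMMA * max(q(next_state, a) for a in ACTIONS)
        Q[(state, action)] = q(state, action) + ALPHA * (target - q(state, action))
        state, steps = next_state, steps + 1

# With enough episodes, the greedy policy retrieves first, then responds
# "word" in the "retrieved" state and "nonword" in the "retrieval_failed" state.
print({k: round(v, 2) for k, v in Q.items()})
```

In this toy setup the production ordering (retrieve before responding) emerges from the learned Q-values rather than being hand coded, which is the kind of learning the paper evaluates at much larger scale with tabular and function-approximation (DQN) agents.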

Keywords: decision; cognitive models; production; reinforcement learning; production-based cognitive models

Journal Title: Topics in Cognitive Science
Year Published: 2021
