Abstract
We describe two experiments designed to test whether the ease with which people can label features of the environment influences human reinforcement learning. The first experiment presents evidence that people are more efficient at learning to discern relevant features of a task when candidate features are easier to name. The second experiment shows that learning what action to take in a given state is easier when states have more readily nameable verbal labels, an effect that was especially pronounced in environments with more states. The interaction between CLIP, a state-of-the-art AI model trained to map images to natural language concepts, and established human RL algorithms captures the key effects without the need to specify condition-specific parameters. These results suggest a possible role for language information in how humans represent the environment when learning from trial and error.
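One way to picture the modeling approach described above is a minimal sketch in which a state's "nameability" is scored as the similarity between its image embedding and its best-matching word embedding, and that score modulates how efficiently a simple RL learner updates its values. The vectors, the `nameability` scoring rule, and the learning-rate scaling below are illustrative assumptions, not the paper's actual model; in the real setup the embeddings would come from CLIP's joint image–text space.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nameability(image_vec, word_vecs):
    """Hypothetical nameability score: similarity of a state's
    (placeholder) image embedding to its closest word embedding."""
    return max(cosine(image_vec, w) for w in word_vecs)

def q_update(q, state, action, reward, alpha_base, name_score):
    """Toy Q-value update in which more nameable states are learned
    about faster (assumption: nameability scales the learning rate)."""
    alpha = alpha_base * name_score
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward - old)
    return q

# A state whose image embedding aligns perfectly with a word embedding
# gets nameability 1.0 and thus the full base learning rate.
score = nameability([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
q = q_update({}, "red-square", 0, 1.0, 0.5, score)
```

Under this sketch, a hard-to-name state (lower `name_score`) would converge more slowly to the true reward, qualitatively matching the reported advantage for easily labeled states.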
| Original language | English (US) |
|---|---|
| Pages | 3564-3570 |
| Number of pages | 7 |
| State | Published - 2022 |
| Event | 44th Annual Meeting of the Cognitive Science Society: Cognitive Diversity, CogSci 2022 - Toronto, Canada |
| Duration | Jul 27 2022 → Jul 30 2022 |
Conference
| Conference | 44th Annual Meeting of the Cognitive Science Society: Cognitive Diversity, CogSci 2022 |
|---|---|
| Country/Territory | Canada |
| City | Toronto |
| Period | 7/27/22 → 7/30/22 |
Keywords
- language
- reinforcement learning
- state
- task representation
ASJC Scopus subject areas
- Artificial Intelligence
- Computer Science Applications
- Human-Computer Interaction
- Cognitive Neuroscience