TY - GEN
T1 - Playing Atari with six neurons
AU - Cuccu, Giuseppe
AU - Togelius, Julian
AU - Cudré-Mauroux, Philippe
N1 - Publisher Copyright:
© 2019 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
PY - 2019
Y1 - 2019
N2 - Deep reinforcement learning, applied to vision-based problems like Atari games, maps pixels directly to actions; internally, the deep neural network bears the responsibility of both extracting useful information and making decisions based on it. By separating the image processing from decision-making, one could better understand the complexity of each task, as well as potentially find smaller policy representations that are easier for humans to understand and may generalize better. To this end, we propose a new method for learning policies and compact state representations separately but simultaneously for policy approximation in reinforcement learning. State representations are generated by an encoder based on two novel algorithms: Increasing Dictionary Vector Quantization makes the encoder capable of growing its dictionary size over time, to address new observations as they appear in an open-ended online-learning context; Direct Residuals Sparse Coding encodes observations by disregarding reconstruction error minimization, and aiming instead for highest information inclusion. The encoder autonomously selects observations online to train on, in order to maximize code sparsity. As the dictionary size increases, the encoder produces increasingly larger inputs for the neural network: this is addressed by a variation of the Exponential Natural Evolution Strategies algorithm which adapts its probability distribution dimensionality along the run. We test our system on a selection of Atari games using tiny neural networks of only 6 to 18 neurons (depending on the game's controls). These are still capable of achieving results comparable, and occasionally superior, to state-of-the-art techniques which use two orders of magnitude more neurons.
AB - Deep reinforcement learning, applied to vision-based problems like Atari games, maps pixels directly to actions; internally, the deep neural network bears the responsibility of both extracting useful information and making decisions based on it. By separating the image processing from decision-making, one could better understand the complexity of each task, as well as potentially find smaller policy representations that are easier for humans to understand and may generalize better. To this end, we propose a new method for learning policies and compact state representations separately but simultaneously for policy approximation in reinforcement learning. State representations are generated by an encoder based on two novel algorithms: Increasing Dictionary Vector Quantization makes the encoder capable of growing its dictionary size over time, to address new observations as they appear in an open-ended online-learning context; Direct Residuals Sparse Coding encodes observations by disregarding reconstruction error minimization, and aiming instead for highest information inclusion. The encoder autonomously selects observations online to train on, in order to maximize code sparsity. As the dictionary size increases, the encoder produces increasingly larger inputs for the neural network: this is addressed by a variation of the Exponential Natural Evolution Strategies algorithm which adapts its probability distribution dimensionality along the run. We test our system on a selection of Atari games using tiny neural networks of only 6 to 18 neurons (depending on the game's controls). These are still capable of achieving results comparable, and occasionally superior, to state-of-the-art techniques which use two orders of magnitude more neurons.
KW - Evolutionary algorithms
KW - Game playing
KW - Learning agent capabilities
KW - Neuroevolution
UR - http://www.scopus.com/inward/record.url?scp=85072857773&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85072857773&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85072857773
T3 - Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
SP - 998
EP - 1006
BT - 18th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2019
PB - International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
T2 - 18th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2019
Y2 - 13 May 2019 through 17 May 2019
ER -