TY - CONF
T1 - Playing Atari with six neurons (extended abstract)
AU - Cuccu, Giuseppe
AU - Togelius, Julian
AU - Cudré-Mauroux, Philippe
N1 - Funding Information:
This work received support from the Swiss NSF under grant number 407540_167320.
Publisher Copyright:
© 2020 International Joint Conferences on Artificial Intelligence. All rights reserved.
PY - 2020
Y1 - 2020
AB - Deep reinforcement learning applied to vision-based problems like Atari games maps pixels directly to actions; internally, the deep neural network bears the responsibility of both extracting useful information and making decisions based on it. By separating image processing from decision-making, one could better understand the complexity of each task, as well as potentially find smaller policy representations that are easier for humans to understand and may generalize better. To this end, we propose a new method for learning policies and compact state representations separately but simultaneously for policy approximation in reinforcement learning. State representations are generated by an encoder based on two novel algorithms: Increasing Dictionary Vector Quantization, which allows the encoder to grow its dictionary over time to accommodate new observations, and Direct Residuals Sparse Coding, which encodes observations by aiming for maximal information inclusion. We test our system on a selection of Atari games using tiny neural networks of only 6 to 18 neurons (depending on the game's controls). These are still capable of achieving results comparable to, and occasionally better than, state-of-the-art techniques that use two orders of magnitude more neurons.
UR - http://www.scopus.com/inward/record.url?scp=85097356469&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85097356469&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85097356469
T3 - IJCAI International Joint Conference on Artificial Intelligence
SP - 4711
EP - 4715
BT - Proceedings of the 29th International Joint Conference on Artificial Intelligence, IJCAI 2020
A2 - Bessiere, Christian
PB - International Joint Conferences on Artificial Intelligence
T2 - 29th International Joint Conference on Artificial Intelligence, IJCAI 2020
Y2 - 7 January 2021 through 15 January 2021
ER -