Abstract
Reinforcement learning (RL) algorithms have performed well in playing challenging board and video games, and a growing body of work focuses on improving their generalization ability. The General Video Game AI (GVGAI) Learning Competition aims to develop agents capable of learning to play game levels that were unseen during training. This article summarizes the five editions of the GVGAI Learning Competition. Three new games were designed for each edition. In the first three editions, the training and test levels were designed separately; since 2020, the three test levels of each game have been generated by perturbing or combining two training levels. We then present a novel RL technique with dual observations for general video game playing, based on the assumption that an agent is more likely to observe similar local information than similar global information across different levels of a game. Instead of directly taking a single, raw pixel-based screenshot of the current game screen as input, our proposed general technique takes the encoded, transformed global and local observations (LOs) of the game screen as two simultaneous inputs, aiming at learning local information for playing new levels. The technique is implemented with three state-of-the-art RL algorithms and tested on the game set of the 2020 GVGAI Learning Competition. Ablation studies demonstrate the strong performance of using encoded, transformed global and local observations as input.
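The dual-observation idea described above can be illustrated with a minimal sketch: from one tile-encoded game screen, derive both a global observation (the full screen) and a local observation (a fixed-size window centred on the avatar), to be fed to the agent as two simultaneous inputs. The function name, window size, and zero-padding scheme below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def encode_observation(screen, avatar_pos, local_size=5):
    """Split an encoded game screen into a global and a local observation.

    screen: 2D array of encoded tile ids (the transformed screenshot).
    avatar_pos: (row, col) position of the avatar on the screen.
    local_size: side length of the square local window (assumed odd).
    """
    # Global observation: the full encoded screen.
    global_obs = screen.astype(np.float32)

    # Local observation: a window centred on the avatar, zero-padded at
    # the borders so its shape is always (local_size, local_size).
    pad = local_size // 2
    padded = np.pad(screen, pad, mode="constant", constant_values=0)
    r, c = avatar_pos
    local_obs = padded[r:r + local_size, c:c + local_size].astype(np.float32)
    return global_obs, local_obs
```

In a dual-input architecture, the two arrays returned here would be encoded by separate network branches whose features are combined before the policy head; the local branch is what lets the agent reuse level-independent patterns on unseen levels.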
Original language | English (US) |
---|---|
Pages (from-to) | 202-216 |
Number of pages | 15 |
Journal | IEEE Transactions on Games |
Volume | 15 |
Issue number | 2 |
DOIs | |
State | Published - Jun 1 2023 |
Keywords
- Artificial intelligence
- Atari
- general video game artificial intelligence (GVGAI)
- general video game playing (GVGP)
- reinforcement learning (RL)
ASJC Scopus subject areas
- Software
- Control and Systems Engineering
- Artificial Intelligence
- Electrical and Electronic Engineering