TY - GEN
T1 - Learning Controllable Content Generators
AU - Earle, Sam
AU - Edwards, Maria
AU - Khalifa, Ahmed
AU - Bontrager, Philip
AU - Togelius, Julian
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - It has recently been shown that reinforcement learning can be used to train generators capable of producing high-quality game levels, with quality defined in terms of some user-specified heuristic. To ensure that these generators' output is sufficiently diverse (that is, not amounting to the reproduction of a single optimal level configuration), the generation process is constrained such that the initial seed results in some variance in the generator's output. However, this results in a loss of control over the generated content for the human user. We propose to train generators capable of producing controllably diverse output, by making them 'goal-aware.' To this end, we add conditional inputs representing how close a generator is to some heuristic, and also modify the reward mechanism to incorporate that value. Testing on multiple domains, we show that the resulting level generators are capable of exploring the space of possible levels in a targeted, controllable manner, producing levels of quality comparable to that of their goal-unaware counterparts while remaining diverse along designer-specified dimensions.
AB - It has recently been shown that reinforcement learning can be used to train generators capable of producing high-quality game levels, with quality defined in terms of some user-specified heuristic. To ensure that these generators' output is sufficiently diverse (that is, not amounting to the reproduction of a single optimal level configuration), the generation process is constrained such that the initial seed results in some variance in the generator's output. However, this results in a loss of control over the generated content for the human user. We propose to train generators capable of producing controllably diverse output, by making them 'goal-aware.' To this end, we add conditional inputs representing how close a generator is to some heuristic, and also modify the reward mechanism to incorporate that value. Testing on multiple domains, we show that the resulting level generators are capable of exploring the space of possible levels in a targeted, controllable manner, producing levels of quality comparable to that of their goal-unaware counterparts while remaining diverse along designer-specified dimensions.
KW - conditional generation
KW - game AI
KW - pcgrl
KW - procedural content generation
KW - reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85115702027&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85115702027&partnerID=8YFLogxK
U2 - 10.1109/CoG52621.2021.9619159
DO - 10.1109/CoG52621.2021.9619159
M3 - Conference contribution
AN - SCOPUS:85115702027
T3 - IEEE Conference on Computational Intelligence and Games, CIG
BT - 2021 IEEE Conference on Games, CoG 2021
PB - IEEE Computer Society
T2 - 2021 IEEE Conference on Games, CoG 2021
Y2 - 17 August 2021 through 20 August 2021
ER -