TY - GEN
T1 - Evolving personas for player decision modeling
AU - Holmgard, Christoffer
AU - Liapis, Antonios
AU - Togelius, Julian
AU - Yannakakis, Georgios N.
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2014/10/21
Y1 - 2014/10/21
N2 - This paper explores how evolved game playing agents can be used to represent a priori defined archetypical ways of playing a test-bed game, as procedural personas. The end goal of such procedural personas is substituting players when authoring game content manually, procedurally, or both (in a mixed-initiative setting). Building on previous work, we compare the performance of newly evolved agents to agents trained via Q-learning as well as a number of baseline agents. Comparisons are performed on the grounds of game playing ability, generalizability, and conformity among agents. Finally, all agents' decision making styles are matched to the decision making styles of human players in order to investigate whether the different methods can yield agents who mimic or differ from human decision making in similar ways. The experiments performed in this paper conclude that agents developed from a priori defined objectives can express human decision making styles and that they are more generalizable and versatile than Q-learning and hand-crafted agents.
AB - This paper explores how evolved game playing agents can be used to represent a priori defined archetypical ways of playing a test-bed game, as procedural personas. The end goal of such procedural personas is substituting players when authoring game content manually, procedurally, or both (in a mixed-initiative setting). Building on previous work, we compare the performance of newly evolved agents to agents trained via Q-learning as well as a number of baseline agents. Comparisons are performed on the grounds of game playing ability, generalizability, and conformity among agents. Finally, all agents' decision making styles are matched to the decision making styles of human players in order to investigate whether the different methods can yield agents who mimic or differ from human decision making in similar ways. The experiments performed in this paper conclude that agents developed from a priori defined objectives can express human decision making styles and that they are more generalizable and versatile than Q-learning and hand-crafted agents.
UR - http://www.scopus.com/inward/record.url?scp=84910088081&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84910088081&partnerID=8YFLogxK
U2 - 10.1109/CIG.2014.6932911
DO - 10.1109/CIG.2014.6932911
M3 - Conference contribution
AN - SCOPUS:84910088081
T3 - IEEE Conference on Computational Intelligence and Games, CIG
BT - IEEE Conference on Computational Intelligence and Games, CIG
PB - IEEE Computer Society
T2 - 2014 IEEE Conference on Computational Intelligence and Games, CIG 2014
Y2 - 26 August 2014 through 29 August 2014
ER -