TY - GEN
T1 - Deep interactive evolution
AU - Bontrager, Philip
AU - Lin, Wending
AU - Togelius, Julian
AU - Risi, Sebastian
N1 - Publisher Copyright:
© Springer International Publishing AG, part of Springer Nature 2018.
PY - 2018
AB - This paper describes an approach that combines generative adversarial networks (GANs) with interactive evolutionary computation (IEC). While GANs can be trained to produce lifelike images, their outputs are normally sampled randomly from the learned distribution, providing limited control over the result. On the other hand, interactive evolution has shown promise in creating various artifacts such as images, music, and 3D objects, but traditionally relies on a hand-designed evolvable representation of the target domain. The main insight in this paper is that a GAN trained on a specific target domain can act as a compact and robust genotype-to-phenotype mapping (i.e., most produced phenotypes resemble valid domain artifacts). Once such a GAN is trained, the latent vector given as input to the GAN’s generator network can be put under evolutionary control, allowing controllable and high-quality image generation. We demonstrate the advantage of this novel approach through a user study in which participants were able to evolve images that strongly resemble specific target images.
UR - http://www.scopus.com/inward/record.url?scp=85044656652&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85044656652&partnerID=8YFLogxK
DO - 10.1007/978-3-319-77583-8_18
M3 - Conference contribution
AN - SCOPUS:85044656652
SN - 9783319775821
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 267
EP - 282
BT - Computational Intelligence in Music, Sound, Art and Design - 7th International Conference, EvoMUSART 2018, Proceedings
A2 - Ekárt, Anikó
A2 - Liapis, Antonios
A2 - Romero Cardalda, Juan Jesús
PB - Springer Verlag
T2 - 7th International Conference on Computational Intelligence in Music, Sound, Art and Design, EvoMUSART 2018
Y2 - 4 April 2018 through 6 April 2018
ER -