Autoencoder-augmented neuroevolution for visual Doom playing

Samuel Alvernaz, Julian Togelius

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    Neuroevolution has proven effective at many reinforcement learning tasks, including tasks with incomplete information and delayed rewards, but does not seem to scale well to high-dimensional controller representations, which are needed for tasks where the input is raw pixel data. We propose a novel method where we train an autoencoder to create a comparatively low-dimensional representation of the environment observation, and then use CMA-ES to train neural network controllers acting on this input data. As the behavior of the agent changes the nature of the input data, the autoencoder training progresses throughout evolution. We test this method in the VizDoom environment built on the classic FPS Doom, where it performs well on a health-pack gathering task.
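    The pipeline the abstract describes, compressing each observation with an autoencoder and evolving a small controller on the resulting code, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: a fixed random projection stands in for the trained autoencoder bottleneck, a simple (μ, λ) truncation evolution strategy stands in for CMA-ES, and the dimensions, `TARGET`, and toy fitness function are all invented for the example (the paper's fitness is the episode return on the health-pack task).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed toy sizes: high-dimensional observation, small latent code,
    # and a handful of action outputs. Not taken from the paper.
    OBS_DIM, CODE_DIM, ACT_DIM = 120, 8, 3

    # Stand-in for the trained autoencoder's encoder: a fixed random
    # projection followed by a squashing nonlinearity.
    ENC = rng.normal(size=(CODE_DIM, OBS_DIM)) / np.sqrt(OBS_DIM)
    OBS_BATCH = rng.normal(size=(16, OBS_DIM))   # fake observations
    TARGET = np.array([1.0, -1.0, 0.0])          # invented target action

    def encode(obs_batch):
        # Low-dimensional representation the controller acts on.
        return np.tanh(obs_batch @ ENC.T)

    def fitness(weights):
        # Toy stand-in for an episode return (e.g. health packs gathered):
        # higher when the controller's actions match TARGET on the batch.
        codes = encode(OBS_BATCH)
        acts = np.tanh(codes @ weights.reshape(ACT_DIM, CODE_DIM).T)
        return -np.mean((acts - TARGET) ** 2)

    def evolve(generations=150, pop=32, elite=8, sigma=0.3):
        # (mu, lambda) truncation ES over the controller's weight vector;
        # CMA-ES would additionally adapt the sampling covariance.
        mean = np.zeros(ACT_DIM * CODE_DIM)
        for _ in range(generations):
            cand = mean + sigma * rng.normal(size=(pop, mean.size))
            scores = np.array([fitness(w) for w in cand])
            mean = cand[np.argsort(scores)[-elite:]].mean(axis=0)
        return mean
    ```

    In the paper the two loops are interleaved: because the evolving agent changes which observations it encounters, the autoencoder keeps training on fresh data as evolution progresses, rather than staying fixed as in this sketch.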

    Original language: English (US)
    Title of host publication: 2017 IEEE Conference on Computational Intelligence and Games, CIG 2017
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    Pages: 1-8
    Number of pages: 8
    ISBN (Electronic): 9781538632338
    DOIs
    State: Published - Oct 23 2017
    Event: 2017 IEEE Conference on Computational Intelligence and Games, CIG 2017 - New York, United States
    Duration: Aug 22 2017 - Aug 25 2017

    Publication series

    Name: 2017 IEEE Conference on Computational Intelligence and Games, CIG 2017

    Other

    Other: 2017 IEEE Conference on Computational Intelligence and Games, CIG 2017
    Country/Territory: United States
    City: New York
    Period: 8/22/17 - 8/25/17

    ASJC Scopus subject areas

    • Artificial Intelligence
    • Human-Computer Interaction
    • Media Technology
