First order constrained optimization in policy space

Yiming Zhang, Quan Vuong, Keith W. Ross

    Research output: Contribution to journal › Conference article › peer-review

    Abstract

    In reinforcement learning, an agent attempts to learn high-performing behaviors by interacting with the environment; such behaviors are often quantified in the form of a reward function. However, some aspects of behavior, such as those deemed unsafe and to be avoided, are best captured through constraints. We propose a novel approach called First Order Constrained Optimization in Policy Space (FOCOPS), which maximizes an agent's overall reward while ensuring that the agent satisfies a set of cost constraints. Using data generated by the current policy, FOCOPS first finds the optimal update policy by solving a constrained optimization problem in the nonparameterized policy space. It then projects this update policy back into the parametric policy space. Our approach admits an approximate upper bound on worst-case constraint violation throughout training and is first-order in nature, and therefore simple to implement. We provide empirical evidence that this simple approach achieves better performance on a set of constrained robotic locomotion tasks.
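    A minimal sketch of the projection step described above, assuming PyTorch, precomputed log-probabilities and advantage estimates, and illustrative names (focops_loss, update_nu, lam, nu, and delta are assumptions, not the paper's code). Under these assumptions the projection reduces to minimizing, per state, a KL divergence to the current policy minus a scaled, cost-penalized importance-weighted advantage, masked where the KL trust region is already violated; because this is an ordinary loss, one gradient step per minibatch suffices, which is the first-order property the abstract notes.

    import torch

    def focops_loss(logp, logp_old, kl, adv, cost_adv,
                    lam=1.5, nu=0.1, delta=0.02):
        """Projection loss: KL(pi_theta || pi_k) minus a scaled,
        cost-penalized advantage term, masked outside the trust region."""
        ratio = torch.exp(logp - logp_old)         # pi_theta(a|s) / pi_k(a|s)
        penalized_adv = adv - nu * cost_adv        # reward advantage minus cost penalty
        per_state = kl - (1.0 / lam) * ratio * penalized_adv
        mask = (kl.detach() <= delta).float()      # drop samples whose KL already exceeds delta
        return (per_state * mask).mean()

    def update_nu(nu, avg_cost, cost_limit, lr=0.01, nu_max=2.0):
        """Projected gradient ascent on the cost multiplier nu."""
        return min(max(0.0, nu + lr * (avg_cost - cost_limit)), nu_max)

    # Toy usage with random tensors; in practice logp and kl are computed
    # from the policy network so gradients flow into its parameters.
    n = 64
    logp_old = torch.randn(n)
    logp = (logp_old + 0.01 * torch.randn(n)).requires_grad_()
    kl = 0.01 * torch.rand(n)
    loss = focops_loss(logp, logp_old, kl, torch.randn(n), torch.randn(n))
    loss.backward()   # one first-order step per minibatch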

    Original language: English (US)
    Journal: Advances in Neural Information Processing Systems
    Volume: 2020-December
    State: Published - 2020
    Event: 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online
    Duration: Dec 6, 2020 – Dec 12, 2020

    ASJC Scopus subject areas

    • Computer Networks and Communications
    • Information Systems
    • Signal Processing

