Vulnerability of deep reinforcement learning to policy induction attacks

Vahid Behzadan, Arslan Munir

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Deep learning classifiers are known to be inherently vulnerable to manipulation by intentionally perturbed inputs, known as adversarial examples. In this work, we establish that reinforcement learning techniques based on Deep Q-Networks (DQNs) are also vulnerable to adversarial input perturbations, and verify the transferability of adversarial examples across different DQN models. Furthermore, we present a novel class of attacks based on this vulnerability that enables policy manipulation and induction in the learning process of DQNs. We propose an attack mechanism that exploits the transferability of adversarial examples to implement policy induction attacks on DQNs, and demonstrate its efficacy and impact through an experimental study of a game-learning scenario.
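The crafting step behind such attacks can be illustrated with a minimal FGSM-style sketch. This toy uses a linear Q-function in place of a real DQN, and all dimensions, weights, and the perturbation budget are hypothetical stand-ins, not the paper's setup: the perturbation nudges the state so that an adversary-chosen action's Q-value overtakes the current greedy action's.

```python
import numpy as np

# Illustrative stand-in for a trained Q-network: a linear map Q(s) = W @ s.
# The paper attacks a deep Q-network on game frames; everything below
# (dimensions, weights, epsilon) is a hypothetical toy, not the paper's setup.
rng = np.random.default_rng(0)
n_actions, n_features = 4, 16
W = rng.normal(size=(n_actions, n_features))

def q_values(state):
    """Q-value of each action in the given state."""
    return W @ state

def fgsm_perturb(state, target_action, eps=1.0):
    """FGSM-style crafting: perturb the state along the sign of the
    gradient of Q[target] - Q[greedy]. For a linear Q-function that
    gradient w.r.t. the state is simply W[target] - W[greedy]."""
    greedy = int(np.argmax(q_values(state)))
    grad = W[target_action] - W[greedy]
    return state + eps * np.sign(grad)

state = rng.normal(size=n_features)
order = np.argsort(q_values(state))
greedy, target = int(order[-1]), int(order[-2])  # adversary promotes the runner-up action

adv_state = fgsm_perturb(state, target)
print("greedy action on clean state:", greedy)
print("greedy action on perturbed state:", int(np.argmax(q_values(adv_state))))
```

A policy induction attack, as studied in the paper, would apply such perturbations repeatedly to the states observed during learning, steering the agent toward an adversarial policy; transferability means perturbations crafted on a surrogate model can also fool the victim's DQN.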

Original language: English (US)
Title of host publication: Machine Learning and Data Mining in Pattern Recognition - 13th International Conference, MLDM 2017, Proceedings
Editors: Petra Perner
Publisher: Springer Verlag
Pages: 262-275
Number of pages: 14
ISBN (Print): 9783319624150
State: Published - 2017
Event: 13th International Conference on Machine Learning and Data Mining in Pattern Recognition, MLDM 2017 - New York, United States
Duration: Jul 15, 2017 - Jul 20, 2017

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 10358 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 13th International Conference on Machine Learning and Data Mining in Pattern Recognition, MLDM 2017
Country/Territory: United States
City: New York
Period: 7/15/17 - 7/20/17

Keywords

  • Adversarial examples
  • Deep Q-Learning
  • Manipulation
  • Policy induction
  • Reinforcement learning
  • Vulnerability

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
