A game-theoretic defense against data poisoning attacks in distributed support vector machines

Rui Zhang, Quanyan Zhu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

With large numbers of sensors and control units in networked systems, distributed support vector machines (DSVMs) play a fundamental role in scalable and efficient multi-sensor classification and prediction tasks. However, DSVMs are vulnerable to adversaries who can modify and generate data to deceive the system into misclassification and misprediction. This work aims to design defense strategies for the DSVM learner against a potential adversary. We use a game-theoretic framework to capture the conflicting interests of the DSVM learner and the attacker. The Nash equilibrium of the game allows us to predict the outcome of learning algorithms in adversarial environments and to enhance the resilience of machine learning through dynamic distributed algorithms. We develop a secure and resilient DSVM algorithm with a rejection method, and demonstrate its resiliency against an adversary through numerical experiments.
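The abstract does not give algorithmic details. As a toy illustration of the learner-attacker interaction it describes, the sketch below (assumptions, not the paper's actual algorithm: the `attack` and `train_with_rejection` helpers, the label-flipping attack model, and the hinge-loss rejection threshold are all hypothetical) pits a linear SVM against a budget-limited attacker and defends by rejecting training points with anomalous hinge loss, iterating best responses as a crude stand-in for computing a Nash equilibrium.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_svm(X, y, lam=0.1, epochs=200, lr=0.05):
    """Linear SVM via subgradient descent on L2-regularized hinge loss."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        margins = y * (X @ w)
        # subgradient: regularizer minus mean of y_i * x_i over margin violators
        grad = lam * w - (X * y[:, None])[margins < 1].sum(axis=0) / len(y)
        w -= lr * grad
    return w

# Toy two-class data: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(-2, 1, (40, 2)), rng.normal(2, 1, (40, 2))])
y = np.hstack([-np.ones(40), np.ones(40)])

def attack(X, y, w, budget=8):
    """Hypothetical attacker: flips labels of the smallest-margin points."""
    y_adv = y.copy()
    idx = np.argsort(y * (X @ w))[:budget]
    y_adv[idx] *= -1
    return y_adv

def train_with_rejection(X, y, thresh=1.5):
    """Defender: trains, rejects points with anomalous hinge loss, retrains."""
    w = train_svm(X, y)
    keep = 1 - y * (X @ w) < thresh   # hinge loss below threshold
    return train_svm(X[keep], y[keep])

# Best-response iteration between attacker and defender.
w = train_svm(X, y)
for _ in range(3):
    y_adv = attack(X, y, w)
    w = train_with_rejection(X, y_adv)

clean_acc = np.mean(np.sign(X @ w) == y)
print(f"accuracy on clean labels after attack/defense rounds: {clean_acc:.2f}")
```

The rejection step works because flipped labels produce large hinge losses under any roughly correct classifier, so the poisoned points are the first ones discarded before retraining.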

Original language: English (US)
Title of host publication: 2017 IEEE 56th Annual Conference on Decision and Control, CDC 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4582-4587
Number of pages: 6
Volume: 2018-January
ISBN (Electronic): 9781509028733
DOI: 10.1109/CDC.2017.8264336
State: Published - Jan 18, 2018
Event: 56th IEEE Annual Conference on Decision and Control, CDC 2017, Melbourne, Australia
Duration: Dec 12, 2017 - Dec 15, 2017


Fingerprint

Support vector machines
Attack
Game
Rejection method
Resiliency
Misclassification
Dynamic algorithms
Sensors
Resilience
Distributed algorithms
Parallel algorithms
Nash equilibrium
Learning algorithms
Learning systems
Distributed systems
Machine learning
Numerical experiments

ASJC Scopus subject areas

  • Decision Sciences (miscellaneous)
  • Industrial and Manufacturing Engineering
  • Control and Optimization

Cite this

Zhang, R., & Zhu, Q. (2018). A game-theoretic defense against data poisoning attacks in distributed support vector machines. In 2017 IEEE 56th Annual Conference on Decision and Control, CDC 2017 (Vol. 2018-January, pp. 4582-4587). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/CDC.2017.8264336

