Security of distributed machine learning: A game-theoretic approach to design secure DSVM

Rui Zhang, Quanyan Zhu

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Distributed machine learning algorithms play a significant role in processing massive data sets over large networks. However, the increasing reliance of machine learning on information and communication technologies (ICTs) makes it inherently vulnerable to cyber threats. This work aims to develop secure distributed algorithms that protect the learning process from data-poisoning and network attacks. We establish a game-theoretic framework to capture the conflicting goals of a learner who uses distributed support vector machines (SVMs) and an attacker who is capable of modifying training data and labels. We develop a fully distributed and iterative algorithm to capture the real-time reactions of the learner at each node to adversarial behaviors. The numerical results show that distributed SVMs are prone to failure under different types of attacks, and that the impact of an attack depends strongly on the network structure and the attacker's capabilities.
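
The learner-attacker interaction described in the abstract, a learner retraining an SVM while an adversary flips training labels, can be illustrated with a minimal single-node sketch. The code below is an assumption-laden toy (synthetic data, a subgradient-descent hinge-loss SVM, and a greedy label-flipping attacker with a fixed budget), not the authors' distributed algorithm; all function names and parameters are illustrative.

# Minimal sketch (not the chapter's method): a single-node linear SVM trained by
# hinge-loss subgradient descent, iterated against a label-flipping attacker that
# flips the labels of the points the current classifier fits most confidently.
# All names, budgets, and step sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data
n, d = 200, 2
X = np.vstack([rng.normal(+1.5, 1.0, (n // 2, d)),
               rng.normal(-1.5, 1.0, (n // 2, d))])
y = np.hstack([np.ones(n // 2), -np.ones(n // 2)])

def train_svm(X, y, lam=0.01, lr=0.01, epochs=200):
    """Subgradient descent on the L2-regularized hinge loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1.0  # points violating the margin
        grad_w = lam * w - (y[active, None] * X[active]).mean(axis=0) if active.any() else lam * w
        grad_b = -y[active].mean() if active.any() else 0.0
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def attacker_flip(X, y, w, b, budget=10):
    """Best-response-style attacker: flip the labels of the points with the
    largest positive margins under the current classifier."""
    margins = y * (X @ w + b)
    idx = np.argsort(-margins)[:budget]
    y_adv = y.copy()
    y_adv[idx] *= -1
    return y_adv

# Alternate learner and attacker responses for a few rounds
y_cur = y.copy()
for t in range(5):
    w, b = train_svm(X, y_cur)
    acc = np.mean(np.sign(X @ w + b) == y)  # accuracy measured on clean labels
    print(f"round {t}: clean accuracy = {acc:.3f}")
    y_cur = attacker_flip(X, y_cur, w, b, budget=10)

In the chapter's framework the learner is distributed across network nodes that exchange local estimates; the single learner above is only meant to show how accuracy on the clean labels degrades as the attacker responds to each retrained classifier.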

Original language: English (US)
Title of host publication: Adversary-Aware Learning Techniques and Trends in Cybersecurity
Publisher: Springer International Publishing
Pages: 17-36
Number of pages: 20
ISBN (Electronic): 9783030556921
ISBN (Print): 9783030556914
DOIs
State: Published - Jan 22 2021

Keywords

  • Adversarial distributed machine learning
  • Data-poisoning attack
  • Distributed support vector machines
  • Game theory
  • Label-flipping attack
  • Network-type attacks

ASJC Scopus subject areas

  • General Computer Science
