Abstract
Distributed support vector machines (DSVMs) have been developed to solve large-scale classification problems in networked systems with many sensors and control units. However, such systems become more vulnerable as detection and defense grow increasingly difficult and expensive. This paper aims to develop secure and resilient DSVM algorithms for adversarial environments in which an attacker can manipulate the training data to achieve its objective. We establish a game-theoretic framework that captures the conflicting interests between an adversary and a set of distributed data processing units. The Nash equilibrium of the game allows predicting the outcome of learning algorithms in adversarial environments and enhancing the resilience of machine learning through dynamic distributed learning algorithms. We prove that the convergence of the distributed algorithm is guaranteed without assumptions on the training data or network topology. Numerical experiments corroborate the results and show that the network topology plays an important role in the security of DSVMs: networks with fewer nodes and higher average degrees are more secure, and a balanced network is less vulnerable to attacks.
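To make the attacker-learner interaction concrete, below is a minimal, hypothetical sketch of best-response dynamics between a linear SVM learner and an attacker who perturbs the training data. This is not the paper's distributed algorithm; the hinge-loss learner, the l-infinity perturbation budget `delta`, and all function names are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's algorithm): best-response dynamics
# between a linear SVM learner and a training-data attacker.
import numpy as np

rng = np.random.default_rng(0)

def hinge_grad(w, X, y, lam=0.1):
    """Subgradient of the l2-regularized hinge loss at w."""
    margins = y * (X @ w)
    mask = margins < 1.0                       # points violating the margin
    g = -(y[mask, None] * X[mask]).sum(axis=0) / len(y)
    return g + lam * w

def train_svm(X, y, steps=200, lr=0.1):
    """Learner's best response: subgradient descent on the hinge loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * hinge_grad(w, X, y)
    return w

def attack(X, y, w, delta=0.3):
    """Attacker's best response: shift each point against its label's margin
    within an l-infinity budget delta, maximally decreasing y * (x @ w)."""
    return X - delta * np.sign(y[:, None] * w[None, :])

# Toy two-class data with centers near (1.5, 1.5) and (-1.5, -1.5).
X = rng.normal(size=(200, 2)) + np.array([[1.5, 1.5]]) * rng.choice([-1, 1], 200)[:, None]
y = np.sign(X[:, 0] + X[:, 1])

Xa = X.copy()
for k in range(5):                             # best-response iterations
    w = train_svm(Xa, y)                       # learner reacts to poisoned data
    Xa = attack(X, y, w)                       # attacker reacts to new classifier
    acc = np.mean(np.sign(Xa @ w) == y)
    print(f"round {k}: training accuracy on attacked data = {acc:.3f}")
```

Each round alternates the learner's retraining with the attacker's perturbation; a fixed point of this alternation, where neither side can improve, is the kind of equilibrium outcome the abstract refers to.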
| Original language | English (US) |
|---|---|
| Article number | 8307266 |
| Pages (from-to) | 5512-5527 |
| Number of pages | 16 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 29 |
| Issue number | 11 |
| DOIs | |
| State | Published - Nov 2018 |
Keywords
- Adversarial machine learning
- distributed support vector machines (DSVMs)
- game theory
- networked systems
- resilience
- security
ASJC Scopus subject areas
- Software
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence