Distributed machine learning algorithms play a significant role in processing massive data sets over large networks. However, the increasing reliance of machine learning on information and communication technologies makes it inherently vulnerable to cyber threats. This work aims to develop secure distributed algorithms that protect the learning process from adversaries. We establish a game-theoretic framework to capture the conflicting goals of a learner who uses distributed support vector machines (DSVM) and an attacker who is capable of flipping training labels. We develop a fully distributed and iterative algorithm that captures the real-time reactions of the learner at each node to adversarial behaviors. The numerical results show that DSVM is vulnerable to label-flipping attacks and that the impact of such attacks depends strongly on the network topology.
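To make the threat model concrete, the following is a minimal, illustrative sketch (centralized, not the paper's distributed DSVM algorithm, and not its game-theoretic solution): a linear SVM trained by stochastic subgradient descent on the hinge loss, attacked by flipping the labels of an aggressively chosen subset of positive-class training points. All function names, data parameters, and the specific attack budget are assumptions made for illustration.

```python
import random

# Illustrative sketch only: a centralized linear SVM under a
# label-flipping attack. Parameters and the attack strategy below
# are assumptions for demonstration, not the paper's method.

random.seed(0)

def make_data(n):
    """Two well-separated 2-D Gaussian clusters with labels +1 / -1."""
    data = []
    for _ in range(n):
        data.append(([random.gauss(2.5, 1.0), random.gauss(2.5, 1.0)], 1))
        data.append(([random.gauss(-2.5, 1.0), random.gauss(-2.5, 1.0)], -1))
    return data

def train_svm(data, epochs=50, lr=0.01, lam=0.01):
    """Hinge-loss linear SVM via stochastic subgradient descent."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            if margin < 1:  # point violates the margin: hinge subgradient
                w[0] += lr * (y * x[0] - lam * w[0])
                w[1] += lr * (y * x[1] - lam * w[1])
                b += lr * y
            else:           # only the regularizer contributes
                w[0] -= lr * lam * w[0]
                w[1] -= lr * lam * w[1]
    return w, b

def accuracy(w, b, data):
    correct = sum(1 for x, y in data
                  if y * (w[0] * x[0] + w[1] * x[1] + b) > 0)
    return correct / len(data)

train, test = make_data(100), make_data(100)
w, b = train_svm(train)

# Attacker flips the labels of the 60 positive-class training points
# with the largest clean-model score -- a deliberately aggressive,
# hypothetical stand-in for a strategic label-flipping attacker.
scores = [(w[0] * x[0] + w[1] * x[1] + b, i)
          for i, (x, y) in enumerate(train) if y == 1]
flip = {i for _, i in sorted(scores)[-60:]}
poisoned = [(x, -y if i in flip else y) for i, (x, y) in enumerate(train)]
w_atk, b_atk = train_svm(poisoned)

clean_acc = accuracy(w, b, test)
attacked_acc = accuracy(w_atk, b_atk, test)
print(f"clean test accuracy:    {clean_acc:.3f}")
print(f"attacked test accuracy: {attacked_acc:.3f}")
```

Running this shows the attacked model's accuracy on clean test data dropping well below the clean baseline; in the distributed setting studied in the paper, the analogous degradation additionally depends on the network topology over which the nodes exchange information.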