TY - GEN
T1 - Consensus-based transfer linear support vector machines for decentralized multi-task multi-agent learning
AU - Zhang, Rui
AU - Zhu, Quanyan
PY - 2018/5/21
Y1 - 2018/5/21
AB - Transfer learning has been developed to improve the performance of different but related tasks in machine learning. However, such processes become less efficient as the size of the training data and the number of tasks grow. Moreover, privacy can be violated when tasks contain sensitive and private data that are communicated between nodes and tasks. We propose a consensus-based distributed transfer learning framework in which several tasks aim to find the best linear support vector machine (SVM) classifiers in a distributed network. Using the alternating direction method of multipliers (ADMM), tasks achieve better classification accuracies more efficiently and privately, as each node and each task train on their own data and only decision variables are transferred between different tasks and nodes. Numerical experiments on the MNIST dataset show that knowledge transferred from the source tasks can be used to decrease the risks of target tasks that lack training data or have unbalanced training labels. We show that the risks of target tasks at nodes without source-task data can also be reduced using information transferred from the nodes that contain the source-task data. We also show that target tasks can enter and leave in real time without rerunning the whole algorithm.
KW - Distributed Learning
KW - Multi-Task Learning
KW - Support Vector Machines
KW - Transfer Learning
UR - http://www.scopus.com/inward/record.url?scp=85048587430&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85048587430&partnerID=8YFLogxK
DO - 10.1109/CISS.2018.8362195
M3 - Conference contribution
AN - SCOPUS:85048587430
T3 - 2018 52nd Annual Conference on Information Sciences and Systems, CISS 2018
SP - 1
EP - 6
BT - 2018 52nd Annual Conference on Information Sciences and Systems, CISS 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 52nd Annual Conference on Information Sciences and Systems, CISS 2018
Y2 - 21 March 2018 through 23 March 2018
ER -