Abstract
We propose a framework in which multiple entities collaborate to build a machine learning model while preserving the privacy of their data. The approach relies on feature embeddings: shared or per-entity feature extractors transform the data into a feature space in which the entities can cooperate. We propose two specific methods and compare them with a baseline. In Shared Feature Extractor (SFE) Learning, the entities use a shared feature extractor to compute feature embeddings of their samples. In Locally Trained Feature Extractor (LTFE) Learning, each entity uses a separate feature extractor, and models are trained on the concatenated features from all entities. As a baseline, in Cooperatively Trained Feature Extractor (CTFE) Learning, the entities train models by sharing raw data. Secure multi-party computation is used to train the models without revealing data or features in plaintext. We investigate the trade-offs among SFE, LTFE, and CTFE with respect to performance, privacy leakage (measured with an off-the-shelf membership inference attack), and computational cost. LTFE provides the most privacy, followed by SFE and then CTFE. SFE has the lowest computational cost, while the relative speed of CTFE and LTFE depends on the network architecture. CTFE and LTFE provide the best accuracy. We evaluate the methods on three different datasets.
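To make the distinction among the three settings concrete, the following minimal NumPy sketch illustrates where the feature extractors sit in each variant. It assumes vertically partitioned data (each entity holds a disjoint feature slice of the same samples); all names (`extract`, `w_shared`, etc.) are hypothetical, and the sketch deliberately omits the secure multi-party computation under which the paper actually performs training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two entities, each holding its own private feature slice
# of the same 100 samples (an assumption for illustration only).
x_a = rng.normal(size=(100, 8))       # entity A's private features
x_b = rng.normal(size=(100, 8))       # entity B's private features
y = rng.integers(0, 2, size=100)      # shared labels

def extract(x, w):
    """Stand-in feature extractor: one linear layer plus ReLU."""
    return np.maximum(x @ w, 0.0)

# SFE: both entities apply the SAME shared extractor to their slices.
w_shared = rng.normal(size=(8, 4))
z_sfe = np.concatenate(
    [extract(x_a, w_shared), extract(x_b, w_shared)], axis=1)

# LTFE: each entity uses its OWN locally trained extractor; only the
# resulting embeddings are concatenated for joint model training.
w_a = rng.normal(size=(8, 4))         # trained privately by entity A
w_b = rng.normal(size=(8, 4))         # trained privately by entity B
z_ltfe = np.concatenate(
    [extract(x_a, w_a), extract(x_b, w_b)], axis=1)

# CTFE (baseline): raw data is pooled, so the cooperatively trained
# extractor sees every entity's inputs directly.
w_ctfe = rng.normal(size=(16, 4))
z_ctfe = extract(np.concatenate([x_a, x_b], axis=1), w_ctfe)

# In each variant, a downstream classifier is then trained on the
# embeddings (z_*, y); in the paper this step runs under MPC so that
# neither raw data nor embeddings are ever revealed in plaintext.
```

The sketch makes the privacy ordering plausible: in CTFE the jointly trained extractor touches raw inputs from every entity, in SFE a common extractor links the entities' embedding spaces, and in LTFE only locally produced embeddings ever leave an entity.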
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 486-498 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Dependable and Secure Computing |
| Volume | 21 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jan 1 2024 |
Keywords
- Collaborative learning
- feature extractor
- neural networks
- privacy-preserving training
- secure multiparty computation
ASJC Scopus subject areas
- General Computer Science
- Electrical and Electronic Engineering