Abstract
While inference is needed at the edge, training is typically done in the cloud. The data needed to train a model, as well as the trained model itself, therefore have to be transmitted back and forth between the edge and the cloud training infrastructure. This creates significant security risks, including the insertion of a backdoor into the model without the user's knowledge. This article presents an approach that allows a trained model to operate as expected irrespective of the presence of such a backdoor.

Theocharis Theocharides, University of Cyprus; Muhammad Shafique, Technische Universität Wien.
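To make the threat model concrete, the sketch below illustrates how a data-poisoning backdoor of the kind the abstract warns about could be injected during outsourced cloud training: a small trigger patch is stamped onto a fraction of the training images, which are re-labeled to an attacker-chosen class, so the resulting model behaves normally on clean inputs but misclassifies triggered ones. This is purely illustrative and is not the defense proposed in the article; all names (`stamp_trigger`, `poison_dataset`, `TARGET_CLASS`, `POISON_FRACTION`) are hypothetical.

```python
# Illustrative sketch of a data-poisoning backdoor attack (BadNets-style).
# NOT the article's suppression method; all names here are hypothetical.
import numpy as np

TRIGGER_SIZE = 3        # side length of the square trigger patch, in pixels
TARGET_CLASS = 7        # label the attacker wants triggered inputs to receive
POISON_FRACTION = 0.05  # fraction of the training set the attacker poisons


def stamp_trigger(image: np.ndarray) -> np.ndarray:
    """Stamp a small white patch in the bottom-right corner of an image."""
    poisoned = image.copy()
    poisoned[-TRIGGER_SIZE:, -TRIGGER_SIZE:] = 1.0  # pixels assumed in [0, 1]
    return poisoned


def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   rng: np.random.Generator) -> tuple[np.ndarray, np.ndarray]:
    """Return a copy of the dataset with a random subset backdoored.

    Triggered inputs are re-labeled to TARGET_CLASS, so a model trained on
    this data learns to associate the trigger patch with that class while
    keeping near-normal accuracy on clean data.
    """
    n_poison = int(POISON_FRACTION * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = stamp_trigger(images[i])
        poisoned_labels[i] = TARGET_CLASS
    return poisoned_images, poisoned_labels


# Example: poison 5% of a toy 28x28 grayscale dataset.
rng = np.random.default_rng(0)
images = rng.random((1000, 28, 28))
labels = rng.integers(0, 10, size=1000)
x_poisoned, y_poisoned = poison_dataset(images, labels, rng)
```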
| Original language | English (US) |
|---|---|
| Article number | 8963957 |
| Pages (from-to) | 103-110 |
| Number of pages | 8 |
| Journal | IEEE Design and Test |
| Volume | 37 |
| Issue number | 2 |
| DOIs | |
| State | Published - Apr 2020 |
Keywords
- Defense against model backdooring
- Poisoning attacks
- Attacks on DNNs
- Backdoor suppression
ASJC Scopus subject areas
- Software
- Hardware and Architecture
- Electrical and Electronic Engineering