Backdoor suppression in neural networks using input fuzzing and majority voting

Esha Sarkar, Yousif Alkindi, Michail Maniatakos

Research output: Contribution to journal › Article › peer-review

Abstract

While inference is needed at the edge, training is typically done in the cloud. Data required for training a model, as well as the trained model itself, must therefore be transmitted back and forth between the edge and the cloud training infrastructure. This creates significant security issues, including the insertion of a backdoor into the model sent to the user without the user's knowledge. This article presents an approach that allows a trained model to operate as expected irrespective of the presence of such a backdoor. - Theocharis Theocharides, University of Cyprus; Muhammad Shafique, Technische Universität Wien.
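The abstract does not detail the mechanism, but the title names the two ingredients: fuzzing the input and majority-voting the predictions. A minimal sketch of that general idea (not the authors' exact method; `toy_model`, the noise level, and the vote count are illustrative assumptions) could look like:

```python
import random
from collections import Counter

def fuzz_and_vote(model, x, n_copies=5, noise=0.05, seed=0):
    """Classify several randomly perturbed copies of the input and
    return the majority label. The intuition: small perturbations can
    disturb a precise backdoor trigger pattern while leaving the
    legitimate content, and hence the honest prediction, intact."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_copies):
        # Add small uniform noise to each feature of the input.
        fuzzed = [v + rng.uniform(-noise, noise) for v in x]
        votes.append(model(fuzzed))
    # The most common label across the fuzzed copies wins.
    return Counter(votes).most_common(1)[0][0]

# Hypothetical stand-in for a trained classifier: predicts class 1
# when the mean feature value exceeds 0.5, else class 0.
def toy_model(x):
    return 1 if sum(x) / len(x) > 0.5 else 0

print(fuzz_and_vote(toy_model, [0.9, 0.8, 0.7]))  # -> 1
print(fuzz_and_vote(toy_model, [0.1, 0.2, 0.1]))  # -> 0
```

In practice the number of fuzzed copies trades inference cost against suppression strength, and the noise magnitude must stay small enough not to flip clean predictions.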

Original language: English (US)
Article number: 8963957
Pages (from-to): 103-110
Number of pages: 8
Journal: IEEE Design and Test
Volume: 37
Issue number: 2
DOIs
State: Published - Apr 2020

Keywords

  • Defense against model backdooring
  • Poisoning attacks
  • Attacks on DNNs
  • Backdoor suppression

ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Electrical and Electronic Engineering
