TY - GEN
T1 - Approaches to adversarial drift
AU - Kantchelian, Alex
AU - Afroz, Sadia
AU - Huang, Ling
AU - Islam, Aylin Caliskan
AU - Miller, Brad
AU - Tschantz, Michael Carl
AU - Greenstadt, Rachel
AU - Joseph, Anthony D.
AU - Tygar, J. D.
N1 - Copyright 2013 Elsevier B.V., All rights reserved.
PY - 2013
Y1 - 2013
AB - In this position paper, we argue that to be of practical interest, a machine-learning-based security system must engage with its human operators beyond feature engineering and instance labeling to address the challenge of drift in adversarial environments. We propose that designers of such systems broaden the classification goal into an explanatory goal, which would deepen the interaction with the system's operators. To provide guidance, we advocate an approach based on maintaining one classifier for each class of unwanted activity to be filtered. We also emphasize that the system must be responsive to the operators' constant curation of the training set. We show how this paradigm provides a property we call isolation and how it relates to classical causative attacks. To demonstrate the effects of drift on a binary classification task, we also report on two experiments using a previously unpublished malware data set in which each instance is timestamped according to when it was first seen.
KW - adversarial machine learning
KW - concept drift
KW - malware classification
UR - http://www.scopus.com/inward/record.url?scp=84888987784&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84888987784&partnerID=8YFLogxK
U2 - 10.1145/2517312.2517320
DO - 10.1145/2517312.2517320
M3 - Conference contribution
AN - SCOPUS:84888987784
SN - 9781450324885
T3 - Proceedings of the ACM Conference on Computer and Communications Security
SP - 99
EP - 109
BT - AISec 2013 - Proceedings of the 2013 ACM Workshop on Artificial Intelligence and Security, Co-located with CCS 2013
T2 - 2013 6th Annual ACM Workshop on Artificial Intelligence and Security, AISec 2013, Co-located with the 20th ACM Conference on Computer and Communications Security, CCS 2013
Y2 - 4 November 2013 through 4 November 2013
ER -