TY - JOUR
T1 - Deep learning models for electrocardiograms are susceptible to adversarial attack
AU - Han, Xintian
AU - Hu, Yuxuan
AU - Foschini, Luca
AU - Chinitz, Larry
AU - Jankelson, Lior
AU - Ranganath, Rajesh
N1 - Publisher Copyright:
© 2020, The Author(s), under exclusive licence to Springer Nature America, Inc.
PY - 2020/3/1
Y1 - 2020/3/1
N2 - Electrocardiogram (ECG) acquisition is increasingly widespread in medical and commercial devices, necessitating the development of automated interpretation strategies. Recently, deep neural networks have been used to automatically analyze ECG tracings and outperform physicians in detecting certain rhythm irregularities [1]. However, deep learning classifiers are susceptible to adversarial examples, which are created from raw data to fool the classifier such that it assigns the example to the wrong class, but which are undetectable to the human eye [2,3]. Adversarial examples have also been created for medical-related tasks [4,5]. However, traditional attack methods to create adversarial examples do not extend directly to ECG signals, as such methods introduce square-wave artefacts that are not physiologically plausible. Here we develop a method to construct smoothed adversarial examples for ECG tracings that are invisible to human expert evaluation and show that a deep learning model for arrhythmia detection from single-lead ECG [6] is vulnerable to this type of attack. Moreover, we provide a general technique for collating and perturbing known adversarial examples to create multiple new ones. The susceptibility of deep learning ECG algorithms to adversarial misclassification implies that care should be taken when evaluating these models on ECGs that may have been altered, particularly when incentives for causing misclassification exist.
AB - Electrocardiogram (ECG) acquisition is increasingly widespread in medical and commercial devices, necessitating the development of automated interpretation strategies. Recently, deep neural networks have been used to automatically analyze ECG tracings and outperform physicians in detecting certain rhythm irregularities [1]. However, deep learning classifiers are susceptible to adversarial examples, which are created from raw data to fool the classifier such that it assigns the example to the wrong class, but which are undetectable to the human eye [2,3]. Adversarial examples have also been created for medical-related tasks [4,5]. However, traditional attack methods to create adversarial examples do not extend directly to ECG signals, as such methods introduce square-wave artefacts that are not physiologically plausible. Here we develop a method to construct smoothed adversarial examples for ECG tracings that are invisible to human expert evaluation and show that a deep learning model for arrhythmia detection from single-lead ECG [6] is vulnerable to this type of attack. Moreover, we provide a general technique for collating and perturbing known adversarial examples to create multiple new ones. The susceptibility of deep learning ECG algorithms to adversarial misclassification implies that care should be taken when evaluating these models on ECGs that may have been altered, particularly when incentives for causing misclassification exist.
UR - http://www.scopus.com/inward/record.url?scp=85081611896&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85081611896&partnerID=8YFLogxK
U2 - 10.1038/s41591-020-0791-x
DO - 10.1038/s41591-020-0791-x
M3 - Letter
C2 - 32152582
AN - SCOPUS:85081611896
SN - 1078-8956
VL - 26
SP - 360
EP - 363
JO - Nature Medicine
JF - Nature Medicine
IS - 3
ER -