TY - JOUR
T1 - Adversarial optimization for dictionary attacks on speaker verification
AU - Marras, Mirko
AU - Korus, Paweł
AU - Memon, Nasir
AU - Fenu, Gianni
N1 - Funding Information:
Marras thanks Sardinia Regional Government for financial support (P.O.R. Sardegna F.S.E. Oper. Prog. of Auton. Region of Sardinia, E.S.F. 2014–2020, Axis III, TG 10, PoI 10ii, SG 10.5). This work has been supported by the Italian Ministry of Education, University and Research under the "iLearnTV" Project (DD n.1937 5.6.2014, CUP F74G14000200008 F19G14000910008).
Publisher Copyright:
Copyright © 2019 ISCA
PY - 2019
Y1 - 2019
N2 - In this paper, we assess vulnerability of speaker verification systems to dictionary attacks. We seek master voices, i.e., adversarial utterances optimized to match against a large number of users by pure chance. First, we perform menagerie analysis to identify utterances which intrinsically hold this property. Then, we propose an adversarial optimization approach for generating master voices synthetically. Our experiments show that, even in the most secure configuration, on average, a master voice can match approx. 20% of females and 10% of males without any knowledge about the population. We demonstrate that dictionary attacks should be considered as a feasible threat model for sensitive and high-stakes deployments of speaker verification.
AB - In this paper, we assess vulnerability of speaker verification systems to dictionary attacks. We seek master voices, i.e., adversarial utterances optimized to match against a large number of users by pure chance. First, we perform menagerie analysis to identify utterances which intrinsically hold this property. Then, we propose an adversarial optimization approach for generating master voices synthetically. Our experiments show that, even in the most secure configuration, on average, a master voice can match approx. 20% of females and 10% of males without any knowledge about the population. We demonstrate that dictionary attacks should be considered as a feasible threat model for sensitive and high-stakes deployments of speaker verification.
KW - Adversarial Examples
KW - Authentication
KW - Biometrics
KW - Dictionary Attacks
KW - Speaker Verification
UR - http://www.scopus.com/inward/record.url?scp=85074694088&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85074694088&partnerID=8YFLogxK
U2 - 10.21437/Interspeech.2019-2430
DO - 10.21437/Interspeech.2019-2430
M3 - Conference article
AN - SCOPUS:85074694088
SN - 2308-457X
VL - 2019-September
SP - 2913
EP - 2917
JO - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
JF - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
T2 - 20th Annual Conference of the International Speech Communication Association: Crossroads of Speech and Language, INTERSPEECH 2019
Y2 - 15 September 2019 through 19 September 2019
ER -