Dictionary Attacks on Speaker Verification

Mirko Marras, Pawel Korus, Anubhav Jain, Nasir Memon

Research output: Contribution to journal › Article › peer-review


In this paper, we propose dictionary attacks against speaker verification: a novel attack vector that aims to match a large fraction of a speaker population by chance. We introduce a generic formulation of the attack that can be used with various speech representations and threat models. The attacker uses adversarial optimization to maximize the raw similarity of speaker embeddings between a seed speech sample and a proxy population. The resulting master voice successfully matches a non-trivial fraction of people in an unknown population. Adversarial waveforms obtained with our approach can match on average 69% of females and 38% of males enrolled in the target system at a strict decision threshold calibrated to yield a false alarm rate of 1%. By using the attack with a black-box voice cloning system, we obtain master voices that are effective in the most challenging conditions and transferable between speaker encoders. We also show that, when combined with multiple attempts, this attack raises even more serious concerns about the security of these systems.
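The core idea in the abstract, perturbing a seed waveform by gradient ascent so its speaker embedding is similar, on average, to a proxy population, can be sketched as follows. This is not the authors' implementation: the tiny linear encoder, the random "proxy" utterances, and all hyperparameters are placeholders for illustration; a real attack would use a pretrained speaker encoder (and, per the paper, possibly a black-box voice cloning system) on actual speech.

```python
import torch

torch.manual_seed(0)

# Toy stand-in for a speaker encoder; purely illustrative, not the
# encoders evaluated in the paper.
class ToyEncoder(torch.nn.Module):
    def __init__(self, wave_len=1000, emb_dim=32):
        super().__init__()
        self.proj = torch.nn.Linear(wave_len, emb_dim)

    def forward(self, wave):
        # Unit-norm embeddings so dot products are cosine similarities.
        return torch.nn.functional.normalize(self.proj(wave), dim=-1)

encoder = ToyEncoder()
for p in encoder.parameters():
    p.requires_grad_(False)  # the attacker only optimizes the waveform

# Hypothetical proxy population: random tensors standing in for
# enrolled utterances the attacker has access to.
proxy_waves = torch.randn(20, 1000)
proxy_embs = encoder(proxy_waves)

# Seed speech sample and an additive adversarial perturbation.
seed = torch.randn(1000)
delta = torch.zeros_like(seed, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

for step in range(200):
    emb = encoder(seed + delta)
    # Maximize average cosine similarity to the proxy population.
    loss = -(emb @ proxy_embs.T).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

master = seed + delta.detach()
base_sim = (encoder(seed) @ proxy_embs.T).mean().item()
final_sim = (encoder(master) @ proxy_embs.T).mean().item()
```

After optimization, `final_sim` should exceed `base_sim`: the perturbed "master voice" sits closer, in embedding space, to the population as a whole than the seed did, which is what lets it match many enrolled speakers by chance.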

Original language: English (US)
Pages (from-to): 773-788
Number of pages: 16
Journal: IEEE Transactions on Information Forensics and Security
State: Published - 2023


Keywords

  • Authentication
  • adversarial machine learning
  • biometrics (access control)
  • impersonation attacks
  • speaker recognition

ASJC Scopus subject areas

  • Safety, Risk, Reliability and Quality
  • Computer Networks and Communications


