Dictionary Attacks on Speaker Verification

Mirko Marras, Pawel Korus, Anubhav Jain, Nasir Memon

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we propose dictionary attacks against speaker verification, a novel attack vector that aims to match a large fraction of the speaker population by chance. We introduce a generic formulation of the attack that can be used with various speech representations and threat models. The attacker uses adversarial optimization to maximize the raw similarity between the speaker embeddings of a seed speech sample and those of a proxy population. The resulting master voice successfully matches a non-trivial fraction of people in an unknown population. Adversarial waveforms obtained with our approach can match on average 69% of females and 38% of males enrolled in the target system at a strict decision threshold calibrated to yield a false alarm rate of 1%. By using the attack with a black-box voice cloning system, we obtain master voices that are effective in the most challenging conditions and transferable between speaker encoders. We also show that, when combined with multiple attempts, this attack raises even more serious concerns about the security of these systems.
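To make the attack formulation concrete, the following is a minimal sketch of the adversarial optimization described above: gradient ascent on an additive waveform perturbation that maximizes the average cosine similarity between the adversarial embedding and the embeddings of a proxy population. The encoder interface, hyperparameters, and the use of PyTorch with an L-infinity perturbation bound are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def optimize_master_voice(seed_wave, proxy_waves, encoder,
                          steps=1000, lr=1e-3, epsilon=0.05):
    """Sketch of master-voice optimization (hypothetical interface).

    seed_wave   : 1-D tensor, the seed speech waveform
    proxy_waves : list of 1-D tensors, proxy-population utterances
    encoder     : differentiable callable mapping a waveform to a speaker embedding
    epsilon     : assumed L-infinity bound on the adversarial perturbation
    """
    # Pre-compute proxy-population embeddings (treated as fixed targets).
    with torch.no_grad():
        proxy_emb = torch.stack([F.normalize(encoder(w), dim=-1)
                                 for w in proxy_waves])

    # Optimize an additive perturbation of the seed waveform.
    delta = torch.zeros_like(seed_wave, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        adv_emb = F.normalize(encoder(seed_wave + delta), dim=-1)
        # Maximize the average cosine similarity to the proxy population.
        loss = -(adv_emb @ proxy_emb.T).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation small so the waveform stays natural-sounding.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)

    return (seed_wave + delta).detach()
```

In a dictionary-attack evaluation, the returned waveform would then be scored against every speaker enrolled in a held-out target system, and the fraction of accepted matches at a fixed false alarm rate gives the attack's success rate.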

Original language: English (US)
Pages (from-to): 1
Number of pages: 1
Journal: IEEE Transactions on Information Forensics and Security
Volume: 18
DOIs
State: Published - 2023

Keywords

  • Dictionaries
  • Fingerprint recognition
  • Optimization
  • Perturbation methods
  • Psychoacoustic models
  • Sociology
  • Statistics

ASJC Scopus subject areas

  • Safety, Risk, Reliability and Quality
  • Computer Networks and Communications
