Mitigating Reverse Engineering Attacks on Local Feature Descriptors

Deeksha Dangwal, Vincent T. Lee, Hyo Jin Kim, Tianwei Shen, Meghan Cowan, Rajvi Shah, Caroline Trippel, Brandon Reagen, Timothy Sherwood, Vasileios Balntas, Armin Alaghi, Eddy Ilg

Research output: Contribution to conference › Paper › peer-review


As autonomous driving and augmented reality evolve, a practical concern is data privacy, notably when these applications rely on image-based localization using user images. The widely adopted technology uses local feature descriptors derived from the images. While it was long thought that these descriptors could not be reverted, recent work has demonstrated that under certain conditions reverse engineering attacks are possible, allowing an adversary to reconstruct RGB user images. This poses a potential risk to user privacy. We take this further and model potential adversaries using a privacy threat model. We demonstrate a reverse engineering attack on sparse feature maps under controlled conditions and analyze the vulnerability of popular descriptors, including FREAK, SIFT, and SOSNet. Finally, we evaluate potential mitigation techniques that select a subset of descriptors to carefully balance privacy reconstruction risk while preserving image matching accuracy. Our results show that similar accuracy can be obtained when revealing less information.
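As an illustration of the subset-selection idea described above, the sketch below keeps only the strongest-response fraction of a set of descriptors so that less information is revealed. This is a minimal, hypothetical example with random data; the scoring scheme, keep ratio, and function names are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: 500 SIFT-like 128-D descriptors with per-keypoint
# detector response scores (both randomly generated here for illustration).
descriptors = rng.normal(size=(500, 128)).astype(np.float32)
responses = rng.uniform(size=500)

def select_subset(desc, scores, keep_ratio=0.4):
    """Keep only the highest-scoring keep_ratio fraction of descriptors,
    revealing fewer descriptors to a potential adversary."""
    k = int(len(desc) * keep_ratio)
    top = np.argsort(scores)[::-1][:k]  # indices of strongest keypoints
    return desc[top]

subset = select_subset(descriptors, responses)
print(subset.shape)  # (200, 128)
```

In practice the selection criterion would be tuned so that matching accuracy is preserved while reconstruction quality for an adversary degrades, which is the trade-off the paper evaluates.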

Original language: English (US)
State: Published - 2021
Event: 32nd British Machine Vision Conference, BMVC 2021 - Virtual, Online
Duration: Nov 22 2021 - Nov 25 2021


Conference: 32nd British Machine Vision Conference, BMVC 2021
City: Virtual, Online

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition

