Abstract
Aging in face images is a type of intra-class variation that degrades biometric recognition performance more strongly than it does for other modalities (such as iris scans and fingerprints). Improving the robustness of automated face recognition systems with respect to aging requires high-quality longitudinal datasets that contain images of a large number of individuals collected across a long time span, ideally decades apart. Unfortunately, there is a dearth of such operational-quality longitudinal datasets. Longitudinal data that meet these requirements can be synthesized using modern generative models. However, these tools may produce unrealistic artifacts or compromise the biometric quality of the age-edited images. In this work, we simulate facial aging and de-aging by leveraging text-to-image diffusion models with the aid of few-shot fine-tuning and intuitive textual prompting. Our method is supervised using identity-preserving loss functions that ensure biometric utility preservation while imparting a high degree of visual realism. We ablate our method using different datasets, state-of-the-art face matchers, and age classification networks. Our empirical analysis validates the success of the proposed method compared to existing schemes. Our code is available at https://github.com/sudban3089/ID-Preserving-Facial-Aging.git
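To illustrate the kind of supervision the abstract describes, below is a minimal sketch (not the authors' actual implementation) of how an identity-preserving term can be combined with a standard diffusion denoising loss during few-shot fine-tuning. The function names, the `face_embedder` network, and the `lambda_id` weight are all assumptions introduced for illustration only.

```python
# Hypothetical sketch: diffusion fine-tuning loss with an identity-preserving term.
# `face_embedder` stands in for any pretrained face-recognition network that maps
# an image batch to identity embeddings; it is NOT part of the paper's released code.

import torch
import torch.nn.functional as F

def identity_preserving_loss(generated, reference, face_embedder):
    """Cosine-distance penalty between identity embeddings of the
    age-edited image and the original subject image."""
    with torch.no_grad():
        ref_emb = F.normalize(face_embedder(reference), dim=-1)
    gen_emb = F.normalize(face_embedder(generated), dim=-1)
    return 1.0 - (gen_emb * ref_emb).sum(dim=-1).mean()

def total_loss(noise_pred, noise_target, generated, reference,
               face_embedder, lambda_id=0.1):
    """Standard epsilon-prediction diffusion loss plus a weighted identity term.
    lambda_id is an assumed hyperparameter, not a value from the paper."""
    diffusion_loss = F.mse_loss(noise_pred, noise_target)
    id_loss = identity_preserving_loss(generated, reference, face_embedder)
    return diffusion_loss + lambda_id * id_loss
```

In this sketch, the reference embedding is detached from the computation graph so gradients only flow through the generated (age-edited) image, which is one common way to keep the identity network fixed while fine-tuning the generator.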
Field | Value |
---|---|
Original language | English (US) |
Pages (from-to) | 1 |
Number of pages | 1 |
Journal | IEEE Transactions on Biometrics, Behavior, and Identity Science |
State | Accepted/In press - 2024 |
Keywords
- Age editing
- Aging
- Biological system modeling
- Biometrics (access control)
- Diffusion models
- Face recognition
- Faces
- Training
- Vectors
ASJC Scopus subject areas
- Instrumentation
- Computer Vision and Pattern Recognition
- Computer Science Applications
- Artificial Intelligence