Carter Yagemann

I'm a computer scientist and cybersecurity researcher. My interests include hacking, system design, and software engineering.

"Justitia" Biometric Privacy to Appear in ASIACCS'21

My coauthors and I will be presenting the paper "Cryptographic Key Derivation from Biometric Inferences for Remote Authentication" at Asia CCS 2021 in June of next year. Below is a preview of the abstract:

Biometric authentication is becoming increasingly popular thanks to its appealing usability and improvements in biometric sensors. At the same time, it raises serious privacy concerns, since the common deployment involves storing bio-templates on remote servers. Current solutions propose keeping these templates on the client's device, outside the server's reach, but this binds the client to the initial device. A more attractive solution is to have the server authenticate the client, thereby decoupling the client from the device.

Unfortunately, existing biometric template protection schemes suffer from either poor practicality or poor accuracy. State-of-the-art deep learning (DL) solutions solve the accuracy problem in face- and voice-based verification. However, existing privacy-preserving methods do not accommodate DL methods, as they are generally tailored to the hand-crafted feature spaces of specific modalities.

In this work, we propose a novel pipeline, Justitia, that makes DL inferences of face and voice biometrics compatible with standard privacy-preserving primitives like fuzzy extractors (FE). To do this, we first form a bridge between the Euclidean (or cosine) space of DL embeddings and the Hamming space of FEs, while maintaining the accuracy and privacy of the underlying schemes. We also introduce efficient noise-handling methods to keep the FE scheme practically applicable.
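To give a rough intuition for such a bridge (this is a generic illustrative sketch, not the paper's actual construction), one well-known way to map a cosine-similarity space into a Hamming space is random-hyperplane hashing: project an embedding onto random hyperplanes and keep only the signs, so that the Hamming distance between the resulting bit strings approximates the angle between the original vectors. All names and parameters below are hypothetical:

```python
import numpy as np

def binarize(embedding, hyperplanes):
    """Map a real-valued embedding to a bit string via sign projections.

    Each bit records which side of a random hyperplane the embedding
    falls on; vectors with a small angle between them agree on most bits.
    """
    return (hyperplanes @ embedding >= 0).astype(np.uint8)

rng = np.random.default_rng(0)
dim, n_bits = 128, 256                      # illustrative sizes
planes = rng.standard_normal((n_bits, dim)) # shared random hyperplanes

enroll = rng.standard_normal(dim)                # enrollment embedding
probe = enroll + 0.1 * rng.standard_normal(dim)  # same user, noisy reading
other = rng.standard_normal(dim)                 # different user

d_same = int(np.sum(binarize(enroll, planes) != binarize(probe, planes)))
d_diff = int(np.sum(binarize(enroll, planes) != binarize(other, planes)))
print(d_same, d_diff)  # same-user distance is far smaller than different-user
```

A fuzzy extractor can then correct the small residual Hamming noise between same-user readings while treating different-user strings (roughly half the bits differ in expectation) as unrecoverable, which is the property the abstract's bridge relies on.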

We implement an end-to-end prototype to evaluate our design, then show how to improve security for sensitive authentications and usability for non-sensitive, day-to-day authentications. Justitia achieves the same error rate as the plaintext baseline on the YouTube Faces benchmark: 0.33% false rejection at zero false acceptance. Moreover, combining face and voice achieves 1.32% false rejection at zero false acceptance. According to our systematic security assessments, conducted with prior approaches and our novel black-box method, Justitia achieves approximately 25 bits and 33 bits of security for the face- and face-and-voice-based pipelines, respectively.