Clearview Reveals Biometric Presentation Attack Detection Feature, Talks Training and Testing

A new presentation attack detection feature has been added to Clearview AI’s Clearview Consent API to allow developers to integrate spoofing detection into identity verification solutions.

Clearview Consent was launched just a few months ago to bring the company’s facial recognition algorithms to a whole new set of use cases as a selfie biometrics tool, and the addition of presentation attack detection capabilities is the next step in its development, according to the people who built it.

Clearview considered a range of approaches, and CEO Hoan Ton-That points out that developers typically don’t have access to the specialized hardware behind device-based 3D biometric systems.

Early engagement with Clearview Consent customers has given the company a better understanding of how companies and developers plan to use it, which has not only convinced it to pursue liveness detection based on 2D images, but also suggested a range of applications.

“We’re also looking at passive video liveness, but some vendors have told us, ‘We have these old profiles, and we want to know how many of them are deepfakes and how many are print attacks,’” Ton-That told Biometric Update in an interview.

He tells the story of a crypto platform that reviewed the images accepted by its KYC provider and found photos of photos and printouts of faces among them.

Clearview’s technology focuses on single images from commercial RGB cameras, vice president of research Terence Liu told Biometric Update during the same video call.

Clearview takes an all-encompassing approach, combining models that look for different things, Liu says.

He shared a demo of the software, which scans for replay attacks and masks separately. In a few of the many instances in the demo, the software detected both in an image that was, to the human eye, clearly a replay. This, Ton-That explains, is due to the threshold settings.

Settings can be customized for different applications from the API, and Clearview provides recommended settings.
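Clearview has not published the parameters behind these settings, but the idea of per-attack-type thresholds with vendor-recommended defaults can be sketched as follows. All names, score types and values here are hypothetical illustrations, not Clearview’s actual API:

```python
# Hypothetical sketch of threshold-based PAD decisions.
# Score categories and threshold values are illustrative only.

DEFAULT_THRESHOLDS = {"replay": 0.5, "mask": 0.5}  # stand-in "recommended" defaults

def classify(scores, thresholds=None):
    """Flag each attack type whose score meets or exceeds its threshold."""
    thresholds = {**DEFAULT_THRESHOLDS, **(thresholds or {})}
    flags = {kind: scores.get(kind, 0.0) >= t for kind, t in thresholds.items()}
    flags["is_attack"] = any(flags.values())
    return flags

# A stricter replay threshold, as a high-risk KYC flow might configure:
result = classify({"replay": 0.42, "mask": 0.10}, thresholds={"replay": 0.3})
# result["replay"] is True because 0.42 >= 0.3, even though the default 0.5 would pass it
```

Lowering a threshold catches more attacks at the cost of more false alarms on genuine selfies, which is why a single setting cannot suit every application.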

Under the hood

Asked about indications that low-quality spoofs and deepfakes can better evade detection by certain systems designed to spot them, Liu has a technical explanation ready.

“If you design your model to target the very fine differentiation between a real face and a really good mask face, your model is going to specialize in that category, but will likely miss the other things,” he explains. “So you want to have an ensemble approach. You have a coarse filter to get rid of all the edge cases, another that zooms in on specific cases, and then you can tailor the type of training data you provide to each of the models.”
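The coarse-to-fine ensemble Liu describes could be sketched roughly as below. The model functions are trivial stand-ins for trained classifiers, not Clearview’s code, and every field name is hypothetical:

```python
# Illustrative coarse-to-fine PAD ensemble, in the spirit of Liu's description.
# Each "model" is a placeholder for a trained classifier returning an attack score.

def coarse_filter(image):
    """Cheap first pass: discard edge cases such as images with no usable face."""
    return image.get("has_face", False)

def replay_model(image):
    """Specialist for screen-replay artifacts (e.g. moire patterns)."""
    return image.get("moire_level", 0.0)

def mask_model(image):
    """Specialist for mask artifacts, e.g. the unusual wrinkle patterns Liu mentions."""
    return image.get("wrinkle_anomaly", 0.0)

def detect_attack(image, threshold=0.5):
    """Run the coarse filter, then the specialists; combine their verdicts."""
    if not coarse_filter(image):
        return {"decision": "reject", "reason": "failed coarse filter"}
    scores = {"replay": replay_model(image), "mask": mask_model(image)}
    attacks = [kind for kind, score in scores.items() if score >= threshold]
    return {"decision": "attack" if attacks else "live", "scores": scores}
```

Because each specialist sees only the data for its own attack category during training, it can stay sharp on that category while the coarse filter absorbs the edge cases.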

“A lot of these commercial masks have very strange wrinkle patterns,” Liu notes.

Ton-That emphasizes the importance of the volume of training data and notes the relatively small size of the training data sets that Clearview had for the PAD algorithms when it started working on the solution.

“We could augment the training sets in a pretty fun way, putting those mask photos in Clearview and finding a lot more mask examples.” An example is a mask of the main character from the television show Breaking Bad, which has been found in images from different angles and in a wide range of lighting conditions.

According to Liu, the continued expansion of larger and deeper neural networks has boosted AI applications over hand-picked features in many fields.

“I see a similar trend happening in the long-standing area of presentation attack detection as well,” he says.

The field of biometric PAD has come a long way from techniques focused on particular characteristics, such as vascular patterns. Now training neural networks comes down to a simple question, says Liu: “Does this score better, in terms of accuracy, on my dataset, and does my dataset come closer to the ground truth for the application that interests me?”
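Liu’s framing reduces model selection to an empirical comparison on a labeled dataset. A minimal sketch of that loop, with a toy dataset and stand-in candidate models (nothing here reflects Clearview’s actual data or models):

```python
# Minimal model-selection loop matching Liu's question:
# "does this score better, in terms of accuracy, on my dataset?"
# The dataset and candidate models are toy stand-ins.

def accuracy(model, dataset):
    """Fraction of (image, label) pairs the model classifies correctly."""
    correct = sum(1 for image, label in dataset if model(image) == label)
    return correct / len(dataset)

# Toy labeled PAD dataset: image features -> ground-truth label.
dataset = [({"moire": 0.9}, "attack"), ({"moire": 0.1}, "live"),
           ({"moire": 0.7}, "attack"), ({"moire": 0.2}, "live")]

# Two candidate "models" differing only in decision boundary.
model_a = lambda img: "attack" if img["moire"] > 0.5 else "live"
model_b = lambda img: "attack" if img["moire"] > 0.8 else "live"

best = max([model_a, model_b], key=lambda m: accuracy(m, dataset))
```

The second half of Liu’s question, whether the dataset itself matches the target application, is the harder part: a model that wins on an unrepresentative dataset can still fail in deployment.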

Concerns about the source of the data Clearview uses to train its algorithms continue to be voiced by the ACLU, but Ton-That told Biometric Update when Clearview Consent launched that the company “doesn’t anticipate any issues” on that front.

The scaling challenges in biometrics development, both for training data volumes and for the models themselves, are familiar and help explain gaps in the performance of some algorithms.

“Models are very smart, but a model is not as smart if it doesn’t see the data for a particular task,” Liu says. “The industry, the whole computer vision industry, not just biometrics, is aware of this. That’s why all these great transformer models are pre-trained. I’m anticipating a similar revolution to the way large language models have revolutionized that field.”

Customer engagement with Clearview Consent has been strong so far, according to Ton-That, with a KYC platform, a BNPL provider and a school safety app among its early adopters. Uptake has been entirely organic so far, and more adopting companies are on the way. He’s optimistic that cloud and Docker deployment options and per-request pricing will help attract more customers.

The company is excited about NIST’s planned PAD testing, and Ton-That says it is investigating all testing options.

“The more testing the better, and the more standardization we have around it the better,” he says.

Ton-That and Liu’s view on testing aligns with their view on training data, with Liu noting that testing should be as broad as possible “to avoid this gap in the field.”

Clearview Consent PAD functionality is now available via API.

Article topics

biometric liveness detection | biometrics | Clearview AI | Clearview Consent | facial biometrics | identity verification | presentation attack detection | research and development | spoofing detection
