Wednesday, January 27, 2021

Researchers show it is possible to fool fingerprint readers with synthetic fingerprints

MasterPrints are real or synthetic fingerprints that happen to match a large number of real people's fingerprints. In this work, a team of researchers from New York University and Michigan State University (MSU) generated master fingerprint images, which they call DeepMasterPrints, using a machine-learning method known as latent variable evolution. Such fingerprints can be used against recognition systems in an attack similar to a "dictionary attack".

In a paper presented at the IEEE International Conference on Biometrics Theory, Applications and Systems (BTAS 2018), the researchers explain that DeepMasterPrints rests on two observations. First, for ergonomic reasons fingerprint sensors are often very small (as on smartphones), so they capture only part of the user's fingerprint. Because identifying a person from a small patch of a fingerprint is much harder than matching a full print, the chance that a partial impression of one finger coincides with a partial impression of a different finger is relatively high. Researcher Aditi Roy exploited this observation and introduced the idea of MasterPrints: a set of real or synthetic fingerprints that happen to match a large number of other fingerprints.
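The effect of small sensors can be illustrated with a toy simulation. This is not the matcher used in the paper: fingerprints are modeled here as random bit vectors, and the template size, window size, and similarity threshold are all illustrative assumptions. The point is only that comparing short windows produces false matches far more often than comparing full templates.

```python
import random

random.seed(0)

TEMPLATE_BITS = 256   # toy stand-in for a full fingerprint template
WINDOW_BITS = 32      # toy stand-in for what a small sensor captures
THRESHOLD = 0.7       # fraction of agreeing bits needed to declare a match

def random_template():
    # each "fingerprint" is just a random bit vector in this toy model
    return [random.randint(0, 1) for _ in range(TEMPLATE_BITS)]

def similarity(a, b):
    # fraction of positions where the two bit vectors agree
    same = sum(x == y for x, y in zip(a, b))
    return same / len(a)

def false_match_rate(window, trials=2000):
    # probability that two *different* fingers are declared a match
    # when only the first `window` bits are compared
    hits = 0
    for _ in range(trials):
        t1, t2 = random_template(), random_template()
        if similarity(t1[:window], t2[:window]) >= THRESHOLD:
            hits += 1
    return hits / trials

full = false_match_rate(TEMPLATE_BITS)
partial = false_match_rate(WINDOW_BITS)
print(f"full-template false-match rate: {full:.4f}")
print(f"partial-window false-match rate: {partial:.4f}")
```

With these parameters, requiring 70% agreement over only 32 bits is satisfied by chance far more often than 70% agreement over all 256 bits, which is the statistical weakness small sensors introduce.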

The second observation is that some fingerprint features are more common than others. A fake fingerprint that contains many of these common features therefore has a better chance of matching other people's fingerprints.
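This "common features" advantage can also be sketched in the same toy bit-vector model. The population pattern, noise rate, and threshold below are invented for illustration; the sketch only shows that a probe built from features shared across a sample of prints matches far more enrollees than a purely random probe.

```python
import random

random.seed(1)

N_BITS = 64
NOISE = 0.2        # chance each person's bit deviates from the common pattern
THRESHOLD = 0.7    # fraction of agreeing bits needed to declare a match

# population-wide common feature pattern (unknown to the attacker)
base = [random.randint(0, 1) for _ in range(N_BITS)]

def person_print():
    # an individual's print: mostly-common features plus personal variation
    return [b ^ (random.random() < NOISE) for b in base]

def matches(probe, enrolled):
    agree = sum(p == e for p, e in zip(probe, enrolled))
    return agree / N_BITS >= THRESHOLD

population = [person_print() for _ in range(500)]

# attacker builds a "common-feature" print by majority vote over a small sample
sample = population[:20]
common = [int(sum(p[i] for p in sample) > 10) for i in range(N_BITS)]

random_probe = [random.randint(0, 1) for _ in range(N_BITS)]

hit_common = sum(matches(common, v) for v in population)
hit_random = sum(matches(random_probe, v) for v in population)
print(f"common-feature probe matched {hit_common} of {len(population)}")
print(f"random probe matched {hit_random} of {len(population)}")
```

The majority-vote probe recovers the shared pattern and so matches most of the population, while the random probe matches almost no one.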

Building on this, the researchers used a type of machine-learning algorithm called a generative adversarial network (GAN) to artificially create new fingerprints that match as many real fingerprints as possible. In this way they built a library of artificial fingerprints that works as a set of master keys against a given biometric identification system. Notably, the attack does not require a fingerprint sample from any particular person: it can be run against anonymous targets and still succeed with some probability.
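The dictionary-attack idea can be sketched as follows. This is not the paper's GAN: the `synth_print` function below is a stand-in that simply samples noisy variants of a common pattern, and the dictionary size and threshold are illustrative assumptions. The sketch shows the attack loop itself: try each entry in a small library against each account and count how many accounts at least one entry unlocks.

```python
import random

random.seed(2)

N_BITS = 64
NOISE = 0.2
THRESHOLD = 0.75   # fraction of agreeing bits needed to declare a match
DICT_SIZE = 5      # attempts allowed before a typical lockout

# population-wide common feature pattern
base = [random.randint(0, 1) for _ in range(N_BITS)]

def synth_print():
    # stand-in for a generative-model sample: biased toward common features
    return [b ^ (random.random() < NOISE) for b in base]

def matches(probe, enrolled):
    agree = sum(p == e for p, e in zip(probe, enrolled))
    return agree / N_BITS >= THRESHOLD

victims = [synth_print() for _ in range(300)]        # enrolled templates
dictionary = [synth_print() for _ in range(DICT_SIZE)]  # attacker's library

# the dictionary attack: an account is broken if any entry matches it
broken = sum(any(matches(d, v) for d in dictionary) for v in victims)
print(f"accounts broken with {DICT_SIZE} tries each: {broken}/{len(victims)}")
```

Even with only a handful of attempts per account, entries biased toward common features break a meaningful fraction of accounts, which is why the researchers compare the technique to a password dictionary attack.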

It would be very difficult for an attacker to use something like DeepMasterPrints in practice, because the artificial intelligence must be painstakingly tuned to each specific system and every system is different. Even so, it is an example of what may become possible over time and something to be aware of. Something similar was seen this year at the Black Hat security conference, when IBM researchers demonstrated that malware can be developed that uses artificial intelligence in attacks involving facial recognition.
