Finger Code

This document describes an algorithm for fingerprint classification and matching using a filterbank-based representation. The algorithm involves several steps: finding the core point, cropping the fingerprint image, sectorizing it, applying a Gabor filter-bank to extract features, and computing a feature vector of sector variances. A fingerprint image is represented by a single core point when it is added to the database, while multiple candidate core points are considered when matching an input image.


A Filterbank-based Representation for Classification and Matching of Fingerprints

Figure 1: Description of the algorithm

Step                                         Corresponding M-function
Find the center (core point localization)    Supercore.m
Crop                                         Cropping.m
Sectorization                                Whichsector.m
Normalize                                    Sector_norm.m
Filter-bank (8 filters)                      Gabor2d_sub.m
Feature vector of variances                  Sector_norm.m
Choice/Reject or add to database             Fprec.m (main file)
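
To give an idea of what one element of the 8-filter Gabor bank looks like, a minimal Matlab sketch is shown below. This is an illustration only, not the code of Gabor2d_sub.m: the filter size, frequency f and standard deviations dx, dy are assumed values chosen for the example.

    % One even-symmetric Gabor filter of the 8-filter bank (illustration only).
    theta = 22.5 * pi/180;                % orientation of this filter (the bank uses 0:22.5:157.5 degrees)
    f     = 0.1;                          % ridge frequency in cycles/pixel (assumed value)
    dx = 4;  dy = 4;                      % standard deviations of the Gaussian envelope (assumed values)
    [x, y] = meshgrid(-16:16, -16:16);    % 33 x 33 filter support (assumed size)
    xr =  x*cos(theta) + y*sin(theta);    % rotate the coordinate system by theta
    yr = -x*sin(theta) + y*cos(theta);
    g  = exp(-0.5*(xr.^2/dx^2 + yr.^2/dy^2)) .* cos(2*pi*f*xr);   % even-symmetric Gabor filter
    % The filtered image would then be obtained with conv2(image, g, 'same').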

Until now, when a new fingerprint image was added to the database, the FingerCode was calculated twice: once for the input image and once for the image rotated by a fixed angle (22.5/2 = 11.25 degrees) in order to make the process rotation-invariant (see the cited reference for more details). The image was rotated with the Matlab function imrotate, which can introduce noise. To avoid this, we now calculate the FingerCode of the rotated image differently: we rotate the sectorization and the orientations of the Gabor filters of the filter-bank by the same angle (22.5/2 degrees). This is equivalent to presenting a rotated image to the filter-bank.

When a new fingerprint image is added to the database, only one core point is found for it. When an input image is selected for fingerprint matching, on the other hand, a list of core point candidates is found and the matching is performed for each of them; in the end only the candidate with the smallest distance is kept. For example, suppose the database contains three images, Img1, Img2 and Img3. Each of them is characterized by a single core point, so there are three core points, each associated with one image in the database. If an image ImgNew is selected for fingerprint matching, a certain number N of core point candidates is found for it. For each of these N candidates, the nearest fingerprint image in the database is determined, giving N distances (one per candidate). The recognized image is the one corresponding to the smallest of these distances (which is associated with one of the initial N core point candidates of ImgNew). This approach is very similar to the algorithm discussed in Erian Bezhani, Dequn Sun, Jean-Luc Nagel, and Sergio Carrato, "Optimized filterbank fingerprint recognition", Proc. SPIE Intern. Symp. Electronic Imaging 2003, 20-24 Jan. 2003, Santa Clara, California.
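
A minimal sketch of this candidate-based matching is given below. The helper names find_core_candidates and fingercode, and the cell array db_codes, are hypothetical stand-ins introduced only for this illustration; the actual implementation is in Fprec.m and the M-files listed in Figure 1, and the rotated-filterbank FingerCode is omitted for brevity.

    % db_codes{m} is assumed to hold the FingerCode (feature vector) of the m-th database image.
    candidates = find_core_candidates(imgNew);      % hypothetical: N candidate core points, one per row
    bestDist = Inf;
    bestImg  = 0;
    for n = 1:size(candidates, 1)
        fc = fingercode(imgNew, candidates(n, :));  % hypothetical: FingerCode around the n-th candidate
        for m = 1:numel(db_codes)
            d = norm(fc - db_codes{m});             % Euclidean distance between feature vectors
            if d < bestDist
                bestDist = d;                       % smallest distance found so far
                bestImg  = m;                       % index of the nearest database image
            end
        end
    end
    % ImgNew is recognized as database image bestImg: the candidate core point
    % that yields the smallest distance determines the final match.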

The pixel-wise orientation field estimation (the M-function is orientation_image_luigiopt.m) is greatly accelerated by reusing previous sum computations: the sum of the elements of a block centered at pixel (I, J) can be reused to compute the sum of the block centered at pixel (I, J+1). Once the sum of the block centered at (I, J) has been calculated (the yellow and orange pixels of Figure 1), the sum of the block centered at (I, J+1) is obtained by subtracting the yellow area from the previous sum and adding the green area (see Figure 2); in this way a large amount of computation is saved. In other words:

SUM(I, J)   = yellow + orange
SUM(I, J+1) = SUM(I, J) - yellow + green
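
The following self-contained Matlab sketch reproduces the same incremental update on a random image; the block size and variable names are chosen only for this example and are not taken from orientation_image_luigiopt.m.

    img  = rand(256, 256);        % any 2-D array stands in for the fingerprint image
    half = 8;                     % half block size, i.e. the block is 17 x 17 pixels (assumed)
    I = 100;  J = 100;            % block center (I, J)
    rows = I-half : I+half;       % rows covered by the block

    S = sum(sum(img(rows, J-half : J+half)));   % full sum of the block centered at (I, J)

    % Block centered at (I, J+1): subtract the leaving column ("yellow" area)
    % and add the entering column ("green" area).
    S_next = S - sum(img(rows, J-half)) + sum(img(rows, J+half+1));

    % Direct computation for comparison: S_direct equals S_next, but the incremental
    % update costs O(block height) instead of O(block area) per pixel.
    S_direct = sum(sum(img(rows, J-half+1 : J+half+1)));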

For more details concerning the implementation of this algorithm, please visit
http://www.ece.cmu.edu/~ee551/Old_projects/projects/s99_19/finalreport.html

The function Supercore.m is discussed in detail in the document corepoint.doc.

References

A. K. Jain, S. Prabhakar, and S. Pankanti, "A Filterbank-based Representation for Classification and Matching of Fingerprints", International Joint Conference on Neural Networks (IJCNN), pp. 3284-3285, Washington DC, July 10-16, 1999. http://www.cse.msu.edu/~prabhaka/publications.html
