


CHAPTER 1

INTRODUCTION

The automatic verification of the identities of individuals is becoming an increasingly important requirement in a variety of applications, especially those involving automatic access control. Examples of such applications are teleshopping, telebanking, physical access control, and the withdrawal of money from automated teller machines (ATMs). Traditionally, passwords, personal cards, PIN numbers and keys have been used in this context. However, security can easily be breached in these systems when a card or key is lost or stolen, or when a password is compromised. Furthermore, difficult passwords may be hard for a legitimate user to remember, and simple passwords are easy for an impostor to guess. The use of biometrics offers an alternative means of identification which helps avoid the problems associated with conventional methods. The word biometrics is defined as the recognition of an individual by checking the measurements of certain physical characteristics or personal traits against a database. Recognition can be based on measurements from any of three biometric categories: intrinsic, extrinsic, and hybrid. Intrinsic biometrics identifies the individual's genetic make-up (e.g. fingerprint or iris patterns). Extrinsic biometrics involves the individual's learnt behavior (e.g. signature or keystrokes). Finally, hybrid biometrics is based on a combination of the individual's physical characteristics and personal traits (e.g. voice characteristics). A critical question is which biological measurements (physical characteristics or personal traits) qualify as a biometric.
Any human trait can be considered as a biometric characteristic as long as it satisfies the following requirements.

Universality: each person should have the selected biometric identifier.
Distinctiveness: any two persons should be sufficiently different in terms of the selected biometric identifier.
Permanence: the biometric identifier should be sufficiently invariant over a given period of time.
Collectability: the biometric identifier should be measurable quantitatively.

In real-life applications, there are a number of additional factors which should be considered:

Performance: which includes accuracy, speed and resource requirements.
Acceptability: the willingness of people to accept the biometric identifier in their daily lives.
Circumvention: the identifier should be sufficiently robust to withstand various fraudulent practices.

1.1 Biometric Systems

A simple biometric system consists of four basic components.

Sensor module: acquires the biometric data.
Feature extraction module: computes a set of feature vectors from the data obtained by the sensor.
Matching module: checks the feature vectors generated by the previous component against those in the template.
Decision making module: accepts or rejects the claimed identity, or establishes a user's identity.

Figure 1.1 General Scheme of a Biometric System.

In general, a biometric recognition system involves two stages of operation. The first of these is enrolment. There are two general processes in this stage. The first is acquisition of the user's biometric data, by means of a biometric reader appropriate to the data sought. The second concerns storage of the biometric data for each user in a reference database. This can be in a variety of forms, including a template or a statistical model generated using the raw data. Whichever method is used, the stored data is labelled according to user identity to facilitate subsequent authentication. The second stage of operation is termed testing.
In this stage, the test biometric data obtained from the user is checked against the reference database for the purpose of recognition. A biometric recognition system can operate in one of two modes: verification or identification. In the verification mode, the user also makes an identity claim. In this case the test data is compared only against the reference data associated with the claimed identity, and the result of this comparison is used to accept or reject the identity claim. In identification, the test data is compared against the data for all registered individuals to determine the identity of the user. Thus, verification and identification are two distinct problems, each with its own inherent complexities.

1.2 Motivation behind Multimodal Biometrics

Despite considerable advances in recent years, there are still serious challenges in obtaining reliable authentication through unimodal biometric systems. These are due to a variety of reasons. For instance, there are problems with enrolment due to the non-universal nature of relevant biometric traits. Equally troublesome is biometric spoofing. Moreover, environmental noise affecting the data acquisition process can lead to deficient accuracy which may disable systems virtually from inception. Speaker verification, for instance, degrades rapidly in noisy environments. Similarly, the effectiveness of face verification depends strongly on lighting conditions and on variations in the subject's pose before the camera. Some of the limitations imposed by unimodal biometric systems can be overcome by using multiple biometric modalities. Multiple evidence provision through multimodal biometric data acquisition may focus on multiple samples of a single biometric trait, designated as multi-sample biometrics. It may also focus on samples of multiple biometric types; this is termed multimodal biometrics.
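The two operating modes described above can be contrasted in a minimal sketch. This is purely illustrative, not the system developed in this thesis: it assumes enrolled templates are simple feature vectors compared by Euclidean distance, and all names are hypothetical.

```python
import numpy as np

# Hypothetical reference database: user id -> enrolled template vector
templates = {
    "alice": np.array([0.0, 0.0]),
    "bob": np.array([1.0, 1.0]),
}

def verify(test_vec, claimed_id, threshold=0.5):
    """Verification mode: a 1:1 comparison of the test data against
    the template of the claimed identity only."""
    distance = np.linalg.norm(test_vec - templates[claimed_id])
    return distance <= threshold  # accept or reject the claim

def identify(test_vec):
    """Identification mode: a 1:N search over all registered users,
    returning the closest enrolled identity."""
    return min(templates, key=lambda uid: np.linalg.norm(test_vec - templates[uid]))
```

A practical system would replace the Euclidean comparison with modality-specific matchers, and identification would also need a reject option for subjects who are not enrolled.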
Higher accuracy and greater resistance to spoofing are the basic advantages of multimodal biometrics over unimodal biometrics. Multimodal biometrics exploits complementary information, and also makes it difficult for an intruder to simultaneously spoof the multiple biometric traits of a registered user. In addition, the problem of non-universality is largely overcome, since multiple traits together can ensure sufficient population coverage. Because of these advantages, a multimodal biometric system is preferred over a single modality, even though its storage requirements, processing time and computational demands are much higher. The fusion of the complementary information in multimodal biometric data has been a research area of considerable interest, as it plays a critical role in overcoming certain important limitations of unimodal systems. The efforts in this area are mainly focused on fusing the information obtained from a variety of independent modalities. For instance, a popular approach is to combine face and speech modalities to achieve a more reliable recognition of individuals. Through such an approach, separate information from different modalities is used to provide complementary evidence about the identity of the users. In such scenarios, fusion is normally performed at the score level, because the individual modalities provide different raw data types and involve different classification methods for discrimination. Fusion methods range from weighting schemes that assign weights to the information streams according to their information content, to support vector machines, which seek the best possible classification boundary according to the training data. Despite these developments, the literature lacks a thorough comparison of various fusion methods for multimodal biometrics.
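As an illustration of score-level fusion, the weighted sum rule mentioned above can be sketched as follows. This is a hedged sketch: the score ranges, weights and threshold are invented for the example, not taken from any system described in this thesis.

```python
def min_max_normalize(score, lo, hi):
    """Map a raw matcher score onto [0, 1] so that scores from
    matchers with different output ranges become comparable."""
    return (score - lo) / (hi - lo)

def sum_rule(scores, weights):
    """Weighted sum-rule fusion of normalized match scores."""
    return sum(w * s for w, s in zip(weights, scores))

# Hypothetical example: a face matcher scoring in [0, 100] and a
# fingerprint matcher scoring in [0, 1]
face = min_max_normalize(72.0, 0.0, 100.0)      # -> 0.72
finger = min_max_normalize(0.80, 0.0, 1.0)      # -> 0.80
fused = sum_rule([face, finger], [0.6, 0.4])    # 0.6*0.72 + 0.4*0.80 = 0.752
accept = fused >= 0.7                           # decision threshold
```

Assigning a larger weight to the more reliable modality is one of the weighting schemes referred to above; the weights here are arbitrary.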
The purpose of the present work is to examine whether the performance of a biometric system can be improved by integrating complementary information which comes primarily from different modalities (multimodality). Variations in the captured biometric data are reflected in the corresponding biometric scores, and can thereby adversely influence the overall effectiveness of biometric recognition. Therefore, an important requirement for the effective operation of a multimodal biometric system in practice is minimization of the effects of variations in the data from the individual modalities deployed. This would allow maximization of the recognition accuracy in the presence of variation (e.g. due to contamination) in some or all types of biometric data involved. However, this is a challenging requirement, as the data variation can be due to a variety of reasons and can have different characteristics. Another difficulty in multimodal biometrics is the lack of information about the relative variation in the different types of biometric data. The term data variation, as used in this thesis, is subdivided into two types: variation in each data type arising from uncontrolled operating conditions, and variation in the relative degradation of data. The former can be due to operating in uncontrolled conditions (e.g. poor illumination of a user's face in face recognition, background noise in voice biometrics, etc.), or user generated (e.g. uncharacteristic sounds from speakers, carelessness in using the sensor when providing fingerprint samples, etc.). The variation in the relative degradation of data arises because, in multimodal biometrics, different data types are normally obtained through independent sensors and data capturing apparatus. Therefore, any data variation of the former type may in fact result in variation in the relative degradation (or goodness) of the different biometric data deployed.
Since, in practice, it may not be possible to fully compensate for the degradation in all biometric data types involved, the relative degradation of data is another important consideration in multimodal biometrics.

1.3 Aims and Objectives

The aim of the present study is to design and develop a multimodal biometric authentication system with higher accuracy than unimodal biometric systems. To achieve this objective, the following sub-objectives were formed:

1. Study of the concepts and methods of face recognition and fingerprint recognition.
2. Developing a platform for capturing face and fingerprint samples of users for the creation of a database.
3. Creation of a master face database and a master fingerprint database.
4. Designing and developing a unimodal biometric face recognition system, and its performance evaluation.
5. Designing and developing a unimodal biometric fingerprint recognition system, and its performance evaluation.
6. Study of existing multimodal biometric systems, and designing and developing a multimodal biometric system using face and fingerprint recognition.

1.4 Face Recognition

The face is a complex multidimensional structure and needs good computing techniques for recognition. The face is our primary and first focus of attention in social life, playing an important role in the identity of an individual. We can recognize a number of faces learned throughout our lifespan and identify those faces at a glance even after years. There may be variations in faces due to ageing and distractions such as a beard, glasses or a change of hairstyle. Face recognition is an integral part of biometrics. Facial features are extracted and implemented through efficient algorithms, and some modifications are made to improve the existing algorithm models.
Computers that detect and recognize faces could be applied to a wide variety of practical applications, including criminal identification, security systems and identity verification. Face recognition and detection can be achieved using technologies related to computer science. Features extracted from a face are processed and compared with similarly processed faces present in the database. If a face is recognized, it is known, or the system may show a similar face existing in the database; otherwise it is unknown. In a surveillance system, if an unknown face appears more than once, it is stored in the database for further recognition. These steps are very useful in criminal identification. In general, face recognition techniques can be divided into two groups based on the face representation they use: appearance-based techniques, which use holistic texture features applied to either the whole face or specific regions in a face image; and feature-based techniques, which use geometric facial features (mouth, eyes, brows, cheeks, etc.) and the geometric relationships between them.

1.4.1 Different Approaches of Face Recognition

There are two predominant approaches to the face recognition problem: geometric (feature based) and photometric (view based). Many different algorithms have since been developed, three of which have been well studied in the face recognition literature. Recognition algorithms can be divided into two main approaches:

1. Geometric: based on the geometrical relationships between facial landmarks, or in other words the spatial configuration of facial features. The main geometrical features of the face, such as the eyes, nose and mouth, are first located, and faces are then classified on the basis of various geometrical distances and angles between these features.

2. Photometric stereo: used to recover the shape of an object from a number of images taken under different lighting conditions.
The shape of the recovered object is defined by a gradient map, which is made up of an array of surface normals. Popular recognition algorithms include:

1. Principal Component Analysis using eigenfaces (PCA)
2. Linear Discriminant Analysis using Fisherfaces (LDA)
3. Elastic Bunch Graph Matching (EBGM)

1.5 Face Detection

Face detection involves separating image windows into two classes: one containing faces, and one containing the background (clutter). It is difficult because, although commonalities exist between faces, they can vary considerably in terms of age, skin color and facial expression. The problem is further complicated by differing lighting conditions, image qualities and geometries, as well as the possibility of partial occlusion and disguise. An ideal face detector would therefore be able to detect the presence of any face under any set of lighting conditions, upon any background. The face detection task can be broken down into two steps. The first step is a classification task that takes some arbitrary image as input and outputs a binary value of yes or no, indicating whether there are any faces present in the image. The second step is the face localization task, which takes an image as input and outputs the location of any face or faces within that image as a bounding box (x, y, width, height). The face detection system can be divided into the following steps:

1. Pre-processing: to reduce the variability in the faces, the images are processed before they are fed into the network. All positive examples, that is, the face images, are obtained by cropping images with frontal faces to include only the front view. All the cropped images are then corrected for lighting through standard algorithms.

2. Classification: neural networks are implemented to classify the images as faces or non-faces by training on these examples. We use both our own implementation of the neural network and the MATLAB neural network toolbox for this task.
Different network configurations are experimented with to optimize the results.

3. Localization: the trained neural network is then used to search for faces in an image and, if present, localize them in a bounding box.

1.6 Fingerprint Recognition

Fingerprints are the graphical flow-like ridges present on human fingers. Finger ridge configurations do not change throughout the life of an individual, except through accidents such as bruises and cuts on the fingertips. This property makes fingerprints a very attractive biometric identifier. Fingerprint-based personal identification has been used for a very long time. Owing to their distinctiveness and stability, fingerprints are the most widely used biometric features. Nowadays, most automatic fingerprint identification systems (AFIS) are based on matching minutiae, which are local ridge characteristics in the fingerprint pattern. The two most prominent minutiae types are the ridge ending and the ridge bifurcation. Based on the features that the matching algorithms use, fingerprint matching can be classified into image-based and graph-based matching. Image-based matching uses the entire gray scale fingerprint image as a template to match against input fingerprint images. The primary shortcoming of this method is that matching may be seriously affected by factors such as contrast variation, image quality variation and distortion, which are inherent properties of fingerprint images. The reason for this limitation is that the gray scale values of a fingerprint image are not stable features. Graph-based matching represents the minutiae in the form of graphs, but the high computational complexity of graph matching hinders its implementation. To reduce the computational complexity, matching the minutiae sets of template and input fingerprint images can instead be done with point pattern matching. Several point pattern matching algorithms have been proposed and discussed in the literature.
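A minimal sketch of point pattern matching over two minutiae sets is given below. It assumes the two sets are already aligned; the greedy pairing strategy and tolerance value are illustrative choices for exposition, not a published algorithm.

```python
import math

def match_minutiae(template_pts, input_pts, tol=10.0):
    """Greedy point pattern matching: count minutiae pairs that fall
    within a spatial tolerance (assumes the two sets are pre-aligned).
    Returns a similarity score = matched pairs / larger set size."""
    unused = list(input_pts)
    matched = 0
    for tx, ty in template_pts:
        best, best_d = None, tol
        for p in unused:
            d = math.hypot(tx - p[0], ty - p[1])
            if d <= best_d:
                best, best_d = p, d
        if best is not None:
            unused.remove(best)   # each input minutia pairs at most once
            matched += 1
    return matched / max(len(template_pts), len(input_pts), 1)
```

Real matchers must also estimate the alignment (rotation, translation) and compare minutiae directions, which this sketch omits.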
A fingerprint system can be separated into two categories: verification and identification. A verification system authenticates a person's identity by comparing the captured biometric characteristic with that person's biometric template(s) pre-stored in the system. It conducts a one-to-one comparison to determine whether the identity claimed by the individual is true, and either rejects or accepts the submitted claim of identity. An identification system recognizes an individual by searching the entire template database for a match. It conducts one-to-many comparisons to establish the identity of the individual, without the subject having to claim an identity (or fails if the subject is not enrolled in the system database). In order to implement a successful algorithm of this nature, it is necessary to understand the topology of a fingerprint. A fingerprint consists of many ridges and valleys that run next to each other, with ridges shown in black and valleys in white. The ridges bend in such ways as to form both local and global structures, either of which can be used to identify the fingerprint. The global level structures consist of many ridges that form arches, loops, whorls and other more detailed classifications; global features shape a special pattern of ridges and valleys. On the other hand, the local level structures, called minutiae, are further classified as either end points or bifurcations.

1.6.1 Types of Minutiae Extraction

There are many minutiae extraction techniques available in the literature. There are generally four categories of detection algorithms, based on the image domain in which they operate:

i) direct gray-level minutiae extraction;
ii) binary image based minutiae extraction;
iii) machine learning methods; and
iv) skeletonization-based minutiae extraction.

In the first category, minutiae are extracted directly from the gray-level image without using binarization and thinning processes.
The second category extracts minutiae from binary image profile patterns. The third category extracts minutiae via machine learning methods. The final category extracts minutiae from binary skeletons. Gray-level minutiae extraction algorithms work well only for good quality fingerprint images (Otsu 1979). Thinning-based minutiae detection algorithms are time-consuming, and the position of detected minutiae shifts by the ridge width. In the machine learning category, various techniques such as neural networks, genetic programming, reinforcement learning and fuzzy logic are used to extract the minutiae points. It has been concluded that the traditional skeletonization-based minutiae extraction method is more satisfactory than genetic programming and reinforcement learning, and hence skeletonization-based minutiae extraction is selected for implementation. In particular, the crossing-number concept is used for the implementation. Minutiae are also given an associated position and direction. Our procedure is mainly based on minutiae, as well as on global level structure for finding a reference point by which the alignment of two templates is accomplished.

CHAPTER 2

LITERATURE SURVEY

2.1 A Multimodal Biometric Based User Identification System

This paper presents past research and development in the field of biometric technology. A multimodal biometric identification system fuses two or more physical or behavioral traits, and is used to improve accuracy. A multimodal biometric identification system based on face and fingerprint traits using fuzzy logic is proposed. Typically, in a multimodal biometric system each biometric trait is processed independently, and the processed information is combined using an appropriate fusion scheme. Here, similarity scores are generated for the fingerprint and face image of a person, and security is increased by using matching score level fusion.
The combination of these features improves the results of identification. The authors propose a multimodal biometric identification system comprising face and fingerprint traits. The multimodal identification system is fused at the matching score level using the sum rule at the verification stage. The accuracy of the multimodal biometric system is high enough that the multimodal authentication system overcomes the limitations of any individual biometric system, and it provides better protection of electronic data and resources from unauthorized users.

2.2 Problem Description

In unimodal biometrics, the captured biometric data might be distorted due to imperfect acquisition conditions, and an enrolled user of the system might be incorrectly rejected. The use of certain biometrics makes the system susceptible to noisy data, such as the inability of a scanner to read dirty fingerprints clearly; such bad data leads to inaccurate matching and may cause a false rejection. Unimodal biometrics is also prone to inter-class similarities within large population groups; for example, a facial recognition camera may not be able to distinguish between two people who are identical twins. Some biometric technologies can be incompatible with a certain subset of the population: when enrolling in a fingerprint system, elderly people and young children may have difficulty due to their faded prints or underdeveloped fingerprint ridges, respectively. Unimodal biometric systems are also vulnerable to spoof attacks. An attacker can steal the biometric information when it is stored on a computer. Some systems store biometric information through software and keep it on the computer or on disk, which makes it vulnerable to copying from any other computer system that can access it; if that computer is connected to the web, someone in another country can copy it. Biometric limitations such as noisy sensor data, inter- and intra-user variations, and spoof attacks are presented.
A brief review of multimodal biometrics is given, together with an overview of biometrics in general. A. K. Jain [5] presented biometrics as a promising frontier for identification, contrasting it with traditional methods that are knowledge based, like passwords, or token based, like ID cards. A comparison of face, fingerprint, hand and iris based on universality, acceptability, permanence and uniqueness is presented as a case study, and it is summarized that biometrics will likely be used in almost every transaction. Even years after the author's observation, biometrics has found its place in everyday transactions of the modern world, and numerous publications related to biometrics are found in the literature. K. Chang [7] presented an integrated multisensory recognition system using acoustic and visual features for person identification. Integration of multiple sources of information was a key issue in the implementation of a reliable system. The work was carried out using the acoustic and visual features of a person for identification. The speaker and face recognition systems are decomposed into two and three single-feature classifiers respectively. The resulting five classifiers produce non-homogeneous scores which are combined using different approaches. The speaker recognition is based on vector quantization of the acoustic parameter space; face recognition is based on comparison of facial features at the pixel level. Integration of information using multiple classifiers is treated as a learning task. M. P. Down [3] presented the standards for identifying biometric accuracy in 2003. In his work carried out under NIST (National Institute of Standards and Technology), facial recognition vendor tests, fingerprint verification tests, etc., were conducted using image-based biometrics. Standards were set, and it was observed that not all subjects can easily be fingerprinted: 2 to 3.5% have damaged friction ridges.
This work provided a platform indicating that a dual biometric system including more images may be needed to meet existing system requirements. U. M. Bubeck [2] presented a critical analysis of the measurement of accuracy and performance of a biometric system. He compared and criticized the current approaches, stating that performance under larger databases must be tested. He also stated that a second source of uncertainty, which affects the overall accuracy, should be considered. It is also observed that accuracy depends on how the information in the biometric features is used, not on the complexity of the design. The author also presents typical accuracy indices such as symmetry, asymmetry, matching scores, false match rate and false non-match rate. P. Fränti [10] presented tools and techniques for biometric testing. Performance limitations that are nearly impossible to work around were analyzed, and working towards multiple biometrics for performance improvement was advocated in his work. G. L. Marcialis [8] presented an architecture for the integration of face and fingerprints, with a case study performed using score-based recognition. The integration system first retrieves the top five matches for face recognition; fingerprint verification is then applied to each of the resulting top five matches, and the final decision is made by a decision fusion scheme. Experiments were conducted on a small database of 64 individuals, and it is shown that the integration scenario gave better results than fingerprint or face taken separately. A. Ross [9] provides information on the fusion scenario of a biometric system: limitations such as noisy sensor data, spoof attacks, inter-class similarity and intra-class variations can be reduced using fusion of information. A. Ross [6] presented fusion in the biometric context as a single biometric with multiple samples, multi-biometric samples, multiple classifiers and multiple approaches.
A. Ross presented that fusion at the match score level increases the performance of the biometric system. Fingerprint and face data were obtained from 50 users with 5 samples each to generate 500 (50x10) genuine scores, which were compared with 12250 impostor scores. The sum rule is used to find the weighted average of the final score. Decision tree and linear discriminant methods were used for comparison with the sum rule. User-specific applications with widely acceptable biometric character selection were addressed. Ross and Jain gave an overview of multimodal biometrics, presenting levels of fusion and fusion scenarios using multiple sensors, multiple classifiers and multiple approaches. Integration strategies such as feature level fusion, match score level fusion, rank level fusion and decision level fusion are presented. Multimodal biometrics addresses several limitations of unimodal systems; the performance of a unimodal system can be improved with the integration of multiple sources of information. Researchers have presented various papers related to the application and implementation of multimodal biometric systems, and some of the fusion level implementations have contributed to biometric systems. Viola and Jones [12] made face detection practically feasible in real world applications. The Viola and Jones face detector contains three main ideas that make it possible to build a successful face detector that can run in real time: the integral image, classifier learning with AdaBoost, and a cascade structure. Face detection methods in the literature can be classified into two groups, knowledge based and image based. Knowledge based methods use facial features (such as the shape of the face, eyes and eyebrows), template matching, skin color, etc. Many detection algorithms are based on facial features. HI-fang [13] detected faces and facial features by extracting skin-like regions with the YCbCr color space, and edges are detected in the skin-like region.
Eyes and mouth are found using geometrical information. Researchers have proposed various approaches using skin color to detect facial features, and human face detection using template matching has also been proposed by various researchers in the literature.

CHAPTER 3

FACE RECOGNITION

3.1 Principal Component Analysis (PCA)

Principal component analysis (PCA) was invented in 1901 by Karl Pearson. PCA is a variable reduction procedure, useful when the obtained data have some redundancy. It reduces the variables to a smaller number of variables, called principal components, which account for most of the variance in the observed variables. Problems arise when we wish to perform recognition in a high-dimensional space; on the other hand, dimensionality reduction implies information loss. The best low-dimensional space is determined by the best principal components. The major advantage of PCA is its use in the eigenface approach, which helps reduce the size of the database needed for recognition of a test image. The images are stored as their feature vectors in the database, found by projecting each trained image onto the set of eigenfaces obtained. PCA is applied in the eigenface approach to reduce the dimensionality of a large data set.

3.2 Block Diagram of PCA

1. Produce the column vectors from the input images.
2. Calculate the covariance matrix.
3. Calculate the eigenvalues and eigenvectors.
4. Normalize the column vectors.
5. Calculate the weight values.

Figure 3.1 Block Diagram of PCA

3.3 Eigenface Approach

This is an adequate and efficient method for face recognition due to its simplicity, speed and learning capability. Eigenfaces are a set of eigenvectors used in the computer vision problem of human face recognition. They refer to an appearance-based approach to face recognition that seeks to capture the variation in a collection of face images and use this information to encode and compare images of individual faces in a holistic manner.
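The block-diagram steps of Section 3.2 can be sketched in NumPy as follows. This is a sketch under stated assumptions: the training images arrive as an M×N×N array, the function name is hypothetical, and the small-matrix shortcut (taking eigenvectors of the M×M matrix instead of the N²×N² covariance matrix) is the one derived later in this chapter.

```python
import numpy as np

def train_eigenfaces(images, k):
    """Eigenface training sketch following the PCA block diagram:
    flatten images to column vectors, mean-center them, eigendecompose
    the small M x M matrix A^T A, map its eigenvectors to eigenfaces of
    A A^T, and compute projection weights.
    images: array of shape (M, N, N); returns (mean, U, weights)."""
    M = images.shape[0]
    A = images.reshape(M, -1).T.astype(float)      # columns Gamma_i, N^2 x M
    mean = A.mean(axis=1, keepdims=True)           # average face Psi
    Phi = A - mean                                 # mean-centered images
    vals, X = np.linalg.eigh(Phi.T @ Phi)          # eigendecompose M x M matrix
    order = np.argsort(vals)[::-1][:k]             # keep top-k eigenvalues
    U = Phi @ X[:, order]                          # eigenfaces U_i = A x_i
    U /= np.linalg.norm(U, axis=0, keepdims=True)  # normalize the columns
    weights = U.T @ Phi                            # k x M projection weights
    return mean, U, weights
```

A test image would then be projected the same way and classified by the nearest weight vector in Euclidean distance, in the manner described at the end of this chapter.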
The eigenfaces are the principal components of a distribution of faces or, equivalently, the eigenvectors of the covariance matrix of the set of face images, where an image with N by N pixels is considered a point in N²-dimensional space. Previous work on face recognition ignored the issue of the face stimulus, assuming that predefined measurements were relevant and sufficient. This suggests that coding and decoding of face images may give information emphasizing the significance of particular features, which may or may not be related to facial features such as the eyes, nose, lips and hair. We want to extract the relevant information in a face image, encode it efficiently, and compare one face encoding with a database of faces encoded similarly. A simple approach to extracting the information content in an image of a face is to capture the variation in a collection of face images. We wish to find the principal components of the distribution of faces, i.e. the eigenvectors of the covariance matrix of the set of face images. Each image location contributes to each eigenvector, so we can display an eigenvector as a sort of face. Each face image can be represented exactly as a linear combination of the eigenfaces. The number of possible eigenfaces is equal to the number of face images in the training set. The faces can also be approximated using the best eigenfaces, those that have the largest eigenvalues and which therefore account for most of the variance within the set of face images. The primary reason for using fewer eigenfaces is computational efficiency.

3.4 Eigenvalues and Eigenvectors

In linear algebra, the eigenvectors of a linear operator are non-zero vectors which, when operated on by the operator, result in a scalar multiple of themselves. The scalar is called the eigenvalue (λ) associated with the eigenvector (X). An eigenvector is a vector that is scaled by a linear transformation; it is a property of a matrix.
When a matrix acts on it, only the vector's magnitude is changed, not its direction:

AX = λX, where A is the matrix.
(A − λI)X = 0, where I is the identity matrix.

This is a homogeneous system of equations, fundamental in linear algebra. A non-trivial solution exists if and only if

det(A − λI) = 0, where det denotes the determinant.

When evaluated, this becomes a polynomial of degree n, called the characteristic polynomial of A. If A is n by n, there are n solutions, or n roots, of the characteristic polynomial. Thus there are n Eigen values of A satisfying

AXi = λiXi, where i = 1, 2, ..., n.

If the Eigen values are all distinct, there are n associated linearly independent eigenvectors, whose directions are unique, which span an n-dimensional Euclidean space.

3.5 Face Image Representation

A training set of M images of size N×N is represented by vectors of size N². Each face is represented by Γ1, Γ2, Γ3, ..., ΓM. The feature vector of a face is first stored in an N×N matrix; this two-dimensional array is then reshaped into a one-dimensional column vector, so that each face image is represented by the vector Γi.

3.6 Mean and Mean Centered Images

The average face image is calculated as

Ψ = (Γ1 + Γ2 + Γ3 + ... + ΓM)/M.

Each face differs from the average by Φi = Γi − Ψ, which is called the mean centered image.

3.7 Covariance Matrix

A covariance matrix is constructed as C = AAᵀ, where A = [Φ1, Φ2, ..., ΦM] is of size N²×M, so that C is of size N²×N². The Eigen vectors of this covariance matrix need to be calculated, but for images of any realistic size this is a tedious task. For simplicity we instead calculate AᵀA, which is only of size M×M, and consider its eigenvectors Xi, such that

AᵀAXi = λiXi.
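The mean-centering and covariance-trick steps above can be sketched in a few lines of NumPy. This is an illustrative sketch under assumed toy data, not the implementation used in this work; the `eigenfaces` helper name is my own.

```python
import numpy as np

def eigenfaces(faces):
    """faces: M x N^2 array, one flattened face image per row.
    Returns the mean face Psi, the Eigen faces U (as columns) and the
    Eigen values, computed via the small M x M matrix A^T A rather
    than the huge N^2 x N^2 matrix AA^T."""
    Gamma = np.asarray(faces, dtype=float)   # Gamma_1 ... Gamma_M as rows
    Psi = Gamma.mean(axis=0)                 # average face
    A = (Gamma - Psi).T                      # N^2 x M, columns are Phi_i
    vals, X = np.linalg.eigh(A.T @ A)        # eigenpairs of the M x M matrix
    order = np.argsort(vals)[::-1]           # largest Eigen values first
    vals, X = vals[order], X[:, order]
    keep = vals > 1e-10 * vals.max()         # discard zero Eigen values
    U = A @ X[:, keep]                       # Eigen vectors of AA^T
    U /= np.linalg.norm(U, axis=0)           # normalized Eigen faces
    return Psi, U, vals[keep]
```

The columns of U satisfy the relation AAᵀ(AXi) = λi(AXi) derived in the next section, which is what the test below verifies.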
Multiplying the above equation by A on both sides gives

AAᵀAXi = AλiXi
AAᵀ(AXi) = λi(AXi),

so the Eigen vectors of AAᵀ can now be calculated easily with reduced dimensionality, where AXi is the Eigen vector and λi is the Eigen value.

3.8 Eigen Face Space

The Eigen vectors of the covariance matrix AAᵀ are AXi, denoted Ui. The Ui resemble ghostly facial images and are called Eigen faces. Each Eigen vector corresponds to an Eigen face in the face space; the faces for which the Eigen values are zero are discarded, reducing the Eigen face space to an extent. The Eigen faces are ranked according to their usefulness in characterizing the variation among the images.

A face image can be projected into this face space by

Ωk = Uᵀ(Γk − Ψ); k = 1, ..., M,

where (Γk − Ψ) is the mean centered image. Hence the projection of each image can be obtained: Ω1 for the projection of image 1, Ω2 for the projection of image 2, and so forth.

3.9 Euclidean Distance

The test image, Γ, is projected into the face space to obtain a vector Ω as

Ω = Uᵀ(Γ − Ψ).

The distance of Ω to each face class is the Euclidean distance, defined by

εk² = ||Ω − Ωk||²; k = 1, ..., M,

where Ωk is the vector describing the kth face class, so that εk is the Euclidean distance between the test image and the kth trained image. A face is classified as belonging to class k when the minimum εk is below some chosen threshold Θc; otherwise the face is classified as unknown. Θc is half the largest distance between any two face images:

Θc = (1/2) max over j,k of ||Ωj − Ωk||; j, k = 1, ..., M.

We also find the distance ε between the original test image Γ and its reconstruction Γf from the Eigen faces:

ε² = ||Γ − Γf||², where Γf = U·Ω + Ψ.

CHAPTER 4
FINGERPRINT RECOGNITION

4.1 Fingerprints and Minutiae

A fingerprint is a distinct pattern of ridges and valleys on the finger surface of an individual. A ridge is defined as a single curved segment, whereas a valley is the area between two adjacent ridges.
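The projection and Euclidean-distance classification just described can be sketched as follows. This is a self-contained toy sketch: the SVD used here is simply a numerically convenient way to obtain the same Eigen faces as the AᵀA trick, and the function names are assumptions.

```python
import numpy as np

def face_space(train):
    """Return the mean face Psi and the Eigen faces U for a set of
    flattened training images (one image per row)."""
    G = np.asarray(train, dtype=float)
    Psi = G.mean(axis=0)
    # The left singular vectors of A = [Phi_1 ... Phi_M] are the Eigen faces
    U, s, _ = np.linalg.svd((G - Psi).T, full_matrices=False)
    return Psi, U[:, s > 1e-10]            # drop zero Eigen value directions

def classify(train, test_face):
    """Project the test face into face space and return the index of the
    nearest training face together with the Euclidean distance eps_k."""
    Psi, U = face_space(train)
    Omegas = (np.asarray(train, dtype=float) - Psi) @ U   # Omega_k per row
    Omega = U.T @ (np.asarray(test_face, dtype=float) - Psi)
    d = np.linalg.norm(Omegas - Omega, axis=1)            # eps_k for each k
    return int(np.argmin(d)), float(d.min())
```

A real system would additionally compare the minimum distance against the threshold Θc before accepting the match, classifying the face as unknown otherwise.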
So the dark areas of the fingerprint are called ridges, and the white areas that exist between them are known as valleys.

Figure 4.1 Fingerprint

In a fingerprint identification system, the captured fingerprint image needs to be matched against the stored fingerprint templates of every user in the database. This involves a lot of computation and search overhead, so a fingerprint classification system is needed to severely restrict the size of the template database to be searched. A fingerprint recognition system involves many processes and stages. The figure below shows the general process used to identify a fingerprint; the scope of this chapter is indicated by the dashed box in the figure.

Fingerprint feature extraction consists of three main steps:
- Preprocessing.
- Minutiae extraction.
- Post-processing.

The various processes and their intermediate stages are depicted below.

Figure 4.2 Feature Extraction Stages

4.2 Preprocessing

The steps that are present in almost every process are:
- Image enhancement.
- Binarization.
- Thinning.

4.2.1 Image Enhancement

According to Gonzalez and Shapiro, median filtering is performed by replacing a pixel with the median value of its selected neighborhood. In particular, the median filter performs well at filtering out outlier points while leaving edges intact.

Figure 4.3 An Example of an Enhancement Filter

4.2.2 Binarization

The separation of the object and the background is known as binarization: a grayscale picture is turned into a binary picture, which has only two distinct values, black and white, represented by 0 and 1 respectively. A threshold value in the grayscale image is picked for binarization: everything darker than this threshold is converted to black and everything lighter is converted to white.
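The enhancement and binarization steps above can be sketched with NumPy. This is a minimal illustration (a global mean threshold, the simplest of the algorithms discussed below), not the exact pipeline of this work:

```python
import numpy as np

def median_filter(gray, size=3):
    """Replace each pixel with the median of its size x size neighbourhood,
    removing outlier points while leaving edges largely intact."""
    gray = np.asarray(gray, dtype=float)
    pad = size // 2
    padded = np.pad(gray, pad, mode='edge')
    out = np.empty_like(gray)
    for r in range(gray.shape[0]):
        for c in range(gray.shape[1]):
            out[r, c] = np.median(padded[r:r + size, c:c + size])
    return out

def binarize(gray, threshold=None):
    """Everything darker than the threshold becomes 0 (black), everything
    lighter becomes 1 (white). Default threshold: the global image mean."""
    gray = np.asarray(gray, dtype=float)
    if threshold is None:
        threshold = gray.mean()
    return (gray > threshold).astype(np.uint8)
```

Replacing the single global threshold with one computed per image block gives the local-threshold variant described in the next section.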
This method is performed in order to find the identification marks in the fingerprint, such as singularity points or minutiae.

Figure 4.4 An Example of Binarization

The difficulty with binarization lies in finding the correct threshold value, one that eliminates insignificant information while enhancing the significant information. Finding a global threshold value that works on every image is unfeasible: the variation between fingerprint images can be so large that the background in one image is darker than the print in another. Therefore, algorithms to find the optimal value must be applied separately to each image to obtain a functional binarization. There are a number of algorithms for this; the simplest uses the mean or the median of the pixel values in the image. Such an algorithm is based on a global threshold. Nowadays, local thresholds are often used: the image is separated into smaller parts, and a threshold value is calculated for each part. This enables adjustments that are not possible with global calculations. Local thresholds demand many more calculations, but mostly compensate with a better result.

4.2.3 Thinning

One way to produce a skeleton is through thinning algorithms. The technique takes a binary image of a fingerprint and makes the ridges that appear in the print just one pixel wide, without changing the overall pattern or leaving gaps in the ridges, creating a sort of "skeleton" of the image. A structural element consisting of five blocks is used, with each block representing a pixel; the center pixel of the element is called the origin. When the structural element completely overlays the object pixels, only the origin pixel remains and the others are deleted. The figure below shows an example of the thinning process.
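The thinning step can be sketched with the classical Zhang–Suen algorithm. This is an illustrative stand-in: the text does not name a specific thinning algorithm, so Zhang–Suen is assumed here as a standard choice.

```python
import numpy as np

def neighbours(img, r, c):
    """P2..P9: the 8 neighbours, clockwise starting from the pixel above."""
    return [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
            img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]

def zhang_suen_thin(image):
    """Thin a binary image (1 = ridge, 0 = background) until the ridges
    are one pixel wide, preserving connectivity."""
    img = np.pad(np.asarray(image, dtype=np.uint8), 1)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            rows, cols = np.nonzero(img)
            for r, c in zip(rows, cols):
                P = neighbours(img, r, c)
                B = sum(P)  # number of non-zero neighbours
                # A: number of 0 -> 1 transitions in the circular sequence
                A = sum((P[i] == 0) and (P[(i + 1) % 8] == 1) for i in range(8))
                if step == 0:
                    cond = P[0]*P[2]*P[4] == 0 and P[2]*P[4]*P[6] == 0
                else:
                    cond = P[0]*P[2]*P[6] == 0 and P[0]*P[4]*P[6] == 0
                if 2 <= B <= 6 and A == 1 and cond:
                    to_delete.append((r, c))
            for r, c in to_delete:
                img[r, c] = 0
            if to_delete:
                changed = True
    return img[1:-1, 1:-1]
```

The two sub-iterations delete north/west and south/east boundary pixels alternately; the `A == 1` condition is what keeps the ridge pattern connected and gap-free.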
Thinning makes it easier to find minutiae and removes a lot of redundant data.

Figure 4.5 An Example of Thinning

4.3 Minutiae Points

Minutiae points are the major features of a fingerprint image and are used in the matching of fingerprints.

Figure 4.6 Minutiae Points

These minutiae points are used to determine the uniqueness of a fingerprint image. A good quality fingerprint image can have 25 to 80 minutiae, depending on the fingerprint scanner resolution and the placement of the finger on the sensor.

Minutiae can be defined as the points where the ridge lines end or fork; that is, the minutiae points are the local ridge discontinuities, and they can be of many types:
- Ridge ending: the point where a ridge ends abruptly.
- Ridge bifurcation: the point where a single ridge branches out into two or more ridges.
- Ridge dots: very small ridges.
- Ridge islands: slightly longer than dots, occupying a middle space between two diverging ridges.
- Ponds or lakes: the empty spaces between two diverging ridges.
- Spurs: notches protruding from a ridge.
- Bridges: small ridges that join two longer adjacent ridges.
- Crossovers: formed when two ridges cross each other.

Ridge endings and ridge bifurcations are the most commonly used minutia types, since all other types of minutiae are based on combinations of these two. The figure below shows some of the common minutiae patterns.

4.4 Minutiae Extraction – A Part of AFIS

The purpose of minutiae extraction is to identify the salient features called minutiae and to extract them from the input fingerprint images. It is very complicated to choose a prominent and accurate representation of the input images for an AFIS. This representation must incorporate the following properties (Hong et al 2008).
(i) Retain the discriminating power of raw digital fingerprint images, (ii) compactness, (iii) amenability to matching algorithms, (iv) robustness to noise and distortions, and (v) ease of computation.

The first property says that the representation should maintain the individuality of fingerprints, i.e. identity can be established from the representation alone. The second property insists that the representation be concise and clear. The third states that the representation should be appropriate for a matching algorithm. The fourth postulates that the representation be strong enough to tolerate noise and distortions, i.e. it reflects the quality of the fingerprint images. The final property requires that the representation not be too complex to compute.

4.4.1 Minutiae Extraction

A valid representation of a fingerprint is the pattern of its minutiae details. It satisfies the basic properties: compactness, amenability to matching algorithms, robustness to noise and distortions, and ease of computation (Ravi et al 2009). A total of 150 diverse local ridge characteristics, called minutiae details, have been identified. The seven most prominent ridge characteristics are shown in the figure below.

Figure 4.7 Minutiae Details

Only the two most prominent types of minutiae details are used in AFIS, because of their stability and robustness: ridge endings and bifurcations. Ridge endings are the points where a ridge curve terminates, and bifurcations are where a ridge splits from a single path into two paths at a Y-junction (Amengual et al 1997). Figure 4.8 illustrates an example of a ridge ending and a bifurcation; the black pixels correspond to the ridges, and the white pixels to the valleys.

Figure 4.8 Ridge Ending and Ridge Bifurcation

Accurate minutiae detection is an essential component of all minutiae-based fingerprint recognition systems.
Without accurate minutiae detection, the results and performance of a system are not reliable.

Most fingerprint minutia extraction methods are thinning based: a skeletonization process converts each ridge to one pixel wide, and minutia points are then detected by locating the end points and bifurcation points on the thinned ridge skeleton based on the number of neighboring pixels. End points are selected if they have a single neighbor, and bifurcation points if they have more than two neighbors. Another category of methods extracts minutiae directly from the binary image, without a thinning process. In practice, many spurious minutiae are observed because of undesired spikes, breaks, and holes. Therefore, post-processing is usually adopted after feature detection to remove spurious minutiae, based on both statistical and structural information.

4.5 Minutiae Extraction Using the Crossing Number Concept

A minutiae extraction algorithm is said to be good if and only if it satisfies the following requirements: first and foremost, it must not create spurious minutiae; next, genuine minutiae should not be missed; and finally, it should be accurate in localizing the minutiae and in computing minutiae orientation (Shi et al 2006). An algorithm satisfying these requirements is a reliable and efficient minutiae extraction algorithm.

4.5.1 Crossing-Number Concept

The Crossing-Number (CN) concept is the most commonly employed technique for minutiae extraction. It operates on the skeleton image, in which the ridge flow pattern is eight-connected. The minutiae are extracted by scanning the local neighborhood of each ridge pixel in the image using a 3×3 window. The value of CN, defined as half the sum of the differences between pairs of adjacent pixels in the eight-neighborhood, is then computed.
The ridge pixel can then be classified as a ridge ending, bifurcation or non-minutiae point using the properties of the CN, as shown in Table 4.1. For example, a ridge pixel with a CN of one corresponds to a ridge ending, a CN of two corresponds to a connective point and a CN of three corresponds to a bifurcation.

The majority of the approaches to image post-processing proposed in the literature are based on a series of structural rules used to eliminate spurious minutiae. For example, a ridge ending point that is connected to a bifurcation point and lies below a certain threshold distance away is eliminated. However, rather than employing a different set of heuristics each time to eliminate a specific type of false minutiae, some approaches incorporate the validation of the different types of minutiae into a single algorithm.

Table 4.1 Properties of the Crossing Number

CN  Property
0   Isolated point
1   Ending point
2   Connective point
3   Bifurcation
4   Crossing point

4.5.2 Methodology

To extract the minutiae points, the Crossing Number (CN) method is used. By examining the local neighborhood of each ridge pixel using a 3×3 window, this method extracts the ridge endings and bifurcations from the skeleton image. The concept of the Crossing Number is widely used for extracting the minutiae (Jain et al 1997). The crossing number for a pixel P is

CN(P) = (1/2) Σ |Pi − Pi+1|, for i = 1, ..., 8,

where Pi is the binary pixel value in the neighborhood of P, with Pi = 0 or 1 and P9 = P1. For a pixel P, its eight neighboring pixels P1, ..., P8 are scanned in an anticlockwise direction around P. The pixels are then classified according to the property of their CN value. As shown in the figure below, a ridge pixel with a CN of one corresponds to a ridge ending, and a CN of three corresponds to a bifurcation.
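The CN computation and classification above can be sketched directly; this is an illustrative sketch (function names are mine), scanning the eight neighbours anticlockwise as described:

```python
import numpy as np

def crossing_number(skel, r, c):
    """CN = half the sum of |P_i - P_{i+1}| over the 8 neighbours,
    scanned anticlockwise starting from the pixel to the east."""
    P = [skel[r, c+1], skel[r-1, c+1], skel[r-1, c], skel[r-1, c-1],
         skel[r, c-1], skel[r+1, c-1], skel[r+1, c], skel[r+1, c+1]]
    return sum(abs(P[i] - P[(i + 1) % 8]) for i in range(8)) // 2

def extract_minutiae(skel):
    """Classify every interior ridge pixel of a one-pixel-wide skeleton:
    CN == 1 -> ridge ending, CN == 3 -> bifurcation."""
    endings, bifurcations = [], []
    skel = np.asarray(skel, dtype=int)
    for r in range(1, skel.shape[0] - 1):       # border pixels are skipped
        for c in range(1, skel.shape[1] - 1):
            if skel[r, c] != 1:
                continue
            cn = crossing_number(skel, r, c)
            if cn == 1:
                endings.append((r, c))
            elif cn == 3:
                bifurcations.append((r, c))
    return endings, bifurcations
```

On a straight one-pixel line this reports exactly the two end points (interior pixels have CN = 2, a connective point), and a Y-junction pixel is reported as a bifurcation.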
Figure 4.9 Examples of a Ridge Ending (Termination Minutia) and a Bifurcation Minutia Pixel

Depending upon the finger and the sensor area, a measured fingerprint contains an average of about thirty to sixty minutiae points, which can be extracted from the image after the image processing step. The point at which a ridge ends and the point where a bifurcation begins are the most basic minutiae and are used in most applications. After obtaining the thinned ridge map, ridge pixels with three ridge pixel neighbors are identified as ridge bifurcations, and those with one ridge pixel neighbor are identified as ridge endings.

For each extracted minutia, the absolute position (x, y), the direction and, if necessary, the scale(s) are stored. The position of a minutia is generally indicated by its distance from the core, with the core serving as the origin (0,0) of an x,y-axis. Some authors instead use the far left and bottom boundaries of the image as the axes, correcting for misplacement by locating and adjusting from the core. The angle of the minutia is normally used in addition to its placement: when a ridge ends, its direction at the point of termination establishes the angle, which is taken from a horizontal line extending rightward from the core.

4.7 Post-Processing

A preprocessed image is the starting point of minutiae extraction. Though it is a well-defined image, it will still contain deformations and false minutiae that need to be filtered out. Since minutiae are very rarely adjacent, an algorithm may eliminate one of two adjacent minutiae. Scars, sweat or dirt can produce irregular features that appear as false minutiae when the fingerprint image is acquired. Algorithms should trace any points or patterns that do not make sense, such as a spur on an island (which may be a false minutia), and should identify a ridge crossing at right angles to two or three others (which may be a scar or dirt).
Through this post-processing stage, a large proportion of false minutiae are discarded. The figure below shows several examples of false minutiae: in clockwise order, interrupted ridges, forks, spurs, structure ladders, triangles and bridges are portrayed. Two very close lines with the same direction create an interrupted ridge. A fork is produced by two lines connected by a noisy line (Greenberg et al 2000).

Figure 4.10 False Minutiae: Interrupted Ridges, Forks, Spurs, Structure Ladders, Triangles and Bridges in Clockwise Order

Finally, a noisy line between two ridges constitutes a bridge. All of these characteristics generate false minutiae. In the algorithm, first the spurs are eliminated, then the endpoints are merged, the bridges are excluded, the triangles are eradicated and finally the ladder structures are removed; the algorithm is arranged and executed in this sequence to remove the several kinds of false minutiae. Figure 4.11 shows a sample post-processed image with its false minutiae removed.

Figure 4.11 Impact of Removing Spurs: Thinned Image with Spur (left) and after Removal of the Spur (right)

CHAPTER 5
NORMALIZATION

5.1 Min–Max Normalization

The simplest normalization technique is min–max normalization. Min–max normalization is best suited to the case where the bounds (maximum and minimum values) of the scores produced by a matcher are known; in this case, we can easily shift the minimum and maximum scores to 0 and 1, respectively. However, even if the matching scores are not bounded, we can estimate the minimum and maximum values from the given set of matching scores and then apply min–max normalization. Given a set of matching scores {sk}, k = 1, 2, ..., n, the normalized scores are given by

sk' = (sk − min)/(max − min).

When the minimum and maximum values are estimated from the given set of matching scores, this method is not robust (i.e., the method is highly sensitive to outliers in the data used for estimation).
Figure 5.1 Distribution of Genuine and Impostor Scores after Min–Max Normalization

Min–max normalization retains the original distribution of scores except for a scaling factor and transforms all the scores into a common range [0, 1]. Distance scores can be transformed into similarity scores by subtracting the min–max normalized score from 1.

5.2 Decimal Scaling

Decimal scaling can be applied when the scores of different matchers are on a logarithmic scale. For example, if one matcher has scores in the range [0, 1] and the other has scores in the range [0, 1000], the following normalization could be applied:

sk' = sk / 10^n, where n = log10 max(si).

The problems with this approach are its lack of robustness and the assumption that the scores of the different matchers vary by a logarithmic factor.

5.3 Z-Score

The most commonly used score normalization technique is the z-score, calculated using the arithmetic mean and standard deviation of the given data. If we do not have any prior knowledge about the nature of the matching algorithm, we need to estimate the mean and standard deviation from a given set of matching scores. The normalized scores are given by

sk' = (sk − μ)/σ,

where μ is the arithmetic mean and σ is the standard deviation of the given data. However, both the mean and the standard deviation are sensitive to outliers and, hence, this method is not robust. Z-score normalization does not guarantee a common numerical range for the normalized scores of the different matchers. If the input scores are not Gaussian distributed, z-score normalization does not retain the input distribution at the output. This is due to the fact that the mean and standard deviation are the optimal location and scale parameters only for a Gaussian distribution.

5.4 Median and Median Absolute Deviation (MAD)

The median and median absolute deviation (MAD) are insensitive to outliers and to points in the extreme tails of the distribution.
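The three normalizations above can be sketched as follows. These are illustrative helpers; in decimal scaling, n is rounded up to a whole number of digits, which is one common reading of the formula:

```python
import numpy as np

def min_max(scores):
    """Map scores into [0, 1]; min and max are estimated from the data."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def decimal_scaling(scores):
    """Divide by 10^n with n = ceil(log10(max)), i.e. whole-digit scaling."""
    s = np.asarray(scores, dtype=float)
    n = np.ceil(np.log10(s.max()))
    return s / 10.0 ** n

def z_score(scores):
    """Zero-mean, unit-standard-deviation normalization."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()
```

Note how min–max and decimal scaling guarantee a common range while z-score does not, exactly as discussed above.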
Hence, a normalization scheme using the median and MAD would be robust; it is given by

sk' = (sk − median)/MAD,

where MAD = median(|sk − median|). However, the median and MAD estimators have low efficiency compared with the mean and standard deviation estimators; i.e., when the score distribution is not Gaussian, the median and MAD are poor estimates of the location and scale parameters. Therefore, this normalization technique does not retain the input distribution and does not transform the scores into a common numerical range.

5.5 Double Sigmoid Normalization

The normalized score is given by

sk' = 1/(1 + exp(−2(sk − t)/r1)) if sk < t,
sk' = 1/(1 + exp(−2(sk − t)/r2)) otherwise,

where t is the reference operating point and r1 and r2 denote the left and right edges of the region in which the function is linear, i.e., the double sigmoid function exhibits linear characteristics in the interval (t − r1, t + r2). For example, scores in the [0, 300] range can be mapped to the [0, 1] range using t = 200, r1 = 20 and r2 = 30. This scheme transforms the scores into the [0, 1] interval, but it requires careful tuning of the parameters t, r1 and r2 to obtain good efficiency. Generally, t is chosen to be some value falling in the region of overlap between the genuine and impostor score distributions, and r1 and r2 are made equal to the extent of overlap between the two distributions toward the left and right of t, respectively. This normalization scheme provides a linear transformation of the scores in the region of overlap, while the scores outside this region are transformed non-linearly. The double sigmoid normalization is very similar to min–max normalization followed by the application of a two-quadrics (QQ) or logistic (LG) function, as suggested by Snelick et al. When r1 and r2 are large, the double sigmoid normalization closely resembles QQ-min–max normalization.
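The MAD and double sigmoid schemes can be sketched as follows (illustrative helpers; the parameter names follow the text):

```python
import math
import statistics

def mad_normalize(scores):
    """Robust normalization using the median and the median absolute
    deviation, insensitive to outliers in the tails."""
    med = statistics.median(scores)
    mad = statistics.median([abs(s - med) for s in scores])
    return [(s - med) / mad for s in scores]

def double_sigmoid(s, t, r1, r2):
    """Map a raw score into [0, 1]; approximately linear inside
    (t - r1, t + r2), non-linear outside that overlap region."""
    r = r1 if s < t else r2
    return 1.0 / (1.0 + math.exp(-2.0 * (s - t) / r))
```

With t = 200, r1 = 20 and r2 = 30, as in the example above, a score at the operating point maps to 0.5 while scores far outside the overlap region saturate toward 0 and 1.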
On the other hand, we can make the double sigmoid normalization tend toward LG-min–max normalization by assigning small values to r1 and r2. The face scores are converted into similarity scores by subtracting the normalized scores from 1. The parameters of the double sigmoid normalization were chosen as follows: t is the center of the overlapping region between the genuine and impostor score distributions, and r1 and r2 are made equal to the extent of overlap between the two distributions toward the left and right of the center, respectively. A matching score that is equally likely to be from a genuine user and from an impostor is chosen as the center (t) of the region of overlap. Then r1 is the difference between t and the minimum of the genuine scores, while r2 is the difference between the maximum of the impostor scores and t.

5.6 Tanh-Estimators

The tanh-estimators introduced by Hampel et al. [30] are robust and highly efficient. The normalization is given by

sk' = (1/2){ tanh( 0.01 (sk − μGH)/σGH ) + 1 },

where μGH and σGH are the mean and standard deviation estimates, respectively, of the genuine score distribution as given by the Hampel estimators. The Hampel estimators are based on the following influence (ψ) function:

ψ(u) = u                                 for 0 ≤ |u| < a,
ψ(u) = a·sign(u)                         for a ≤ |u| < b,
ψ(u) = a·sign(u)·(c − |u|)/(c − b)       for b ≤ |u| < c,
ψ(u) = 0                                 for |u| ≥ c.

The Hampel influence function reduces the influence of the points at the tails of the distribution (identified by a, b, and c) during the estimation of the location and scale parameters; hence this method is not sensitive to outliers. If the influence of a large number of tail points is reduced, the estimate is more robust but not efficient (optimal). On the other hand, if many tail points influence the estimate, the estimate is not robust but the efficiency increases. Therefore, the parameters a, b and c must be carefully chosen, depending on the amount of robustness required, which in turn depends on the estimated amount of noise in the available training data.
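The tanh normalization and the Hampel influence function can be sketched as follows. In this sketch, μGH and σGH are supplied directly; in practice they come from the Hampel estimators applied to the genuine score distribution.

```python
import math

def tanh_normalize(s, mu_gh, sigma_gh):
    """Tanh-estimator normalization of a single score; genuine scores map
    near 0.5 with a spread governed by the constant 0.01."""
    return 0.5 * (math.tanh(0.01 * (s - mu_gh) / sigma_gh) + 1.0)

def hampel_psi(u, a, b, c):
    """Piecewise Hampel influence function with cut-off points a < b < c;
    points beyond c contribute nothing to the estimates."""
    au = abs(u)
    if au < a:
        return u
    if au < b:
        return math.copysign(a, u)
    if au < c:
        return math.copysign(a, u) * (c - au) / (c - b)
    return 0.0
```

Shrinking a, b and c makes the estimate more robust at the cost of efficiency, as discussed above.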
In our experiments, the values of a, b and c were chosen such that 70% of the scores were in the interval (m − a, m + a), 85% of the scores were in the interval (m − b, m + b), and 95% of the scores were in the interval (m − c, m + c), where m is the median score. The distance-to-similarity transformation is achieved by subtracting the normalized scores from 1. The nature of the tanh transformation is such that the genuine score distribution in the transformed domain has a mean of 0.5 and a standard deviation of approximately 0.01; the constant 0.01 in the expression for tanh normalization determines the spread of the normalized genuine scores. In our experiments, the standard deviations of the genuine scores of the face, fingerprint and hand-geometry modalities are 16.7, 202.1, …

CHAPTER 6
FUSION

Multimodal biometric fusion techniques concern how the information obtained from different biometric modalities is fused. Fusion plays an important role in improving the overall recognition rate of a multi-biometric system. Fusion methods are further categorized into serial and parallel modes, and biometric fusion can be described at three different structural levels: feature level, measurement level and decision level fusion.

6.1 Serial and Parallel Fusion Modes

Fusion can be done in serial or in parallel; these modes are also referred to as hierarchical fusion and holistic fusion, respectively. The figure below illustrates these fusion modes in schematic diagrams. Serial fusion authenticates a person by sequentially assessing the claimant's biometrics: each biometric module generates a decision and passes it to the next biometric module, or simply terminates the process if a reliable decision is obtained. In this mode the information is not really "fused"; rather, each biometric acts as a filter.
As depicted in the figure, in contrast to the serial mode, in parallel mode all biometric outputs are combined simultaneously using a fusion algorithm.

Figure 6.1 Biometrics Fusion Modes (a) Serial Fusion Mode (b) Parallel Fusion Mode

According to the biometric processing stage at which they operate, parallel fusion methods are broadly classified into feature level, measurement level and decision level fusion.

6.2 Different Levels of Fusion

In the context of verification, accepting or rejecting a claimant on the basis of biometrics is a process of information reduction.

Figure 6.2 Levels of Fusion

Combination at an earlier stage is desirable because of the richer information available there. However, combining biometrics at the feature level is difficult, especially when the features are of different kinds (e.g. combining fingerprint minutiae with eigenface coefficients). Although fusion at the decision level is much easier, because only one bit of information is involved, this information is too limited for a significant improvement from fusion. All biometric matchers output a matching score, which contains more useful information than a binary decision. As a result, score level biometric fusion is more popular than the other two in fusion research.

Sensor Level Fusion: Fusion at the sensor level is not preferable in view of the large amount of redundant information contained at this level. Sensor level fusion is applicable only if the multiple sources represent samples of a single biometric trait obtained either using a single sensor or using different compatible sensors.

Feature Extraction Level Fusion: Feature level fusion is achieved by combining the different feature sets extracted from multiple biometric sources. The feature sets can be either homogeneous or heterogeneous. The new concatenated feature vector has a higher dimension.
Feature reduction techniques can then be applied to the large feature set to obtain a meaningful feature set. Feature extraction level fusion is assumed to perform better than the other fusion techniques.

Match Score Level Fusion: Fusion at the score level is generally preferred because of the ease of accessing and combining the matching scores provided by the individual matchers. At the score level, a sufficient amount of information is available to identify an individual, since a score carries neither too much redundant information nor too little. This fusion scheme combines the matching scores, each of which indicates the proximity of a feature vector to its template, to assess the conformity of the claimed user identity.

Decision Level Fusion: In this fusion scheme, the information captured from the various biometric modalities leads to individual decisions, and the result is classified into one of two classes: accept or reject. This fusion level is quite rigid, since only a limited amount of information is available at this level.

6.3 Score Level Fusion

The measurement (score) level is the most popular biometric fusion level. Each biometric generates a confidence value or score to authenticate a person. Such information is homogeneous and accessible; therefore, the majority of biometric fusion research concentrates on this type of fusion. It can be further subdivided into three different types: (a) rule based fusion, (b) classification based fusion, and (c) density based fusion.

6.3.1 Rule Based Fusion

Rule based fusion combines the biometric scores using a fixed rule, e.g. the Sum, Min or Max rules. The main advantages of such combination are that no training session is required and that the method is very efficient in processing time and conceptually simple. However, each biometric module might use a different measurement scale.
For the biometrics to be effectively combined using a fusion rule, score normalization therefore presents the greatest challenge.

6.3.2 Classification Based Fusion

The scores from different biometric sources can be treated as feature vectors, so that fusion is viewed as a classification problem. A classifier is used to construct a separation boundary between the genuine users and impostors in a verification system. Classifiers used for this purpose include K Nearest Neighbors, Decision Trees, Neural Networks, Support Vector Machines and Logistic Regression. Logistic Regression has been evaluated as one of the most effective score level fusion techniques among the classification based approaches.

6.3.3 Density Based Fusion

Density based fusion first transforms the biometric scores into probability densities. These probabilities can then easily be combined using the product rule. Unlike the scores used in rule based fusion, these densities can be applied directly, without normalization. Furthermore, provided that the underlying densities are known, the optimal fusion performance is achieved directly. Since this method is probability based, additional information (e.g. a probability-based quality measure) that aids the fusion process can also be incorporated without modifying the fusion algorithm. Some individuals might not possess certain biometrics, or their measurements may be unreliable; non-density based fusion algorithms cannot be applied in such cases because insufficient input is provided, whereas this missing data problem is easily handled in density based fusion.

6.4 Sum Rule (SUM)

In this method, the scores of the K biometric modalities are added after normalization. The Equal Weighted Sum rule is the most popular Weighted Sum rule:

S = Σ si, for i = 1, ..., K.

The Sum rule is one of the most effective score level approaches in biometric fusion research.
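The (weighted) Sum rule can be sketched in a few lines; equal weights give the plain Sum rule shown above, and the function name is illustrative:

```python
def weighted_sum_fusion(scores, weights=None):
    """Fuse K normalized modality scores with a weighted sum;
    equal (unit) weights reduce this to the plain Sum rule."""
    if weights is None:
        weights = [1.0] * len(scores)
    return sum(w * s for w, s in zip(weights, scores))
```

The scores passed in are assumed to have been normalized already, e.g. by one of the schemes of Chapter 5.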
Although it is a very simple algorithm, it outperforms some of the more complex fusion methods. Weighting is used in the Sum rule to indicate the importance of each modality in the fusion, and there are two general weighting schemes. The first applies the same weight to all the scores generated by a given biometric matcher; this is equivalent to adjusting the gradient of the separation boundary (e.g. using the matcher's performance measure as the weighting parameter). The second applies a different weight to each user, even when the scores are generated by the same biometric matcher; this is equivalent to adjusting the position of the score vector (e.g. using an individual biometric quality measure as the weighting parameter).

6.5 Max Rule (MAX)

In this method the maximum of the multimodal biometric scores is chosen as the fusion score. For the comparison to be meaningful, the raw biometric scores must be normalized in advance:

S = max_i (S_i)

6.6 Support Vector Machine (SVM)

Viewing the multimodal biometric scores as a set of vectors in an n-dimensional score space, an SVM constructs a separating hyperplane such that the distance from the hyperplane to the nearest data points (the support vectors) on both sides is maximized. Vapnik proved that maximizing this distance minimizes the generalized classification error. For non-separable samples, a kernel function can be used to project the samples into a higher-dimensional score space and to construct the separating hyperplane in that space. Commonly used kernel functions are the polynomial function, the radial basis function and the hyperbolic tangent function. Vapnik further suggested a soft margin to allow for the existence of mislabeled samples: the samples are classified with as little error as possible while the maximum distance between the separating hyperplane and the nearest support vectors (the parallel hyperplanes) is maintained.
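A soft-margin linear SVM for score fusion can be sketched as follows. This is a minimal Pegasos-style subgradient trainer, not the SMO solver used in this work; the bias term is dropped by centering the normalized scores, and all score pairs are synthetic:

```python
import random

def train_linear_svm(samples, labels, lam=0.01, epochs=300, seed=0):
    """Soft-margin linear SVM (no bias) via Pegasos-style subgradient descent.

    samples: pre-centered (s1, s2) score pairs; labels: +1 genuine, -1 impostor.
    """
    rng = random.Random(seed)
    w = [0.0, 0.0]
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(samples)), len(samples)):
            t += 1
            eta = 1.0 / (lam * t)
            x, y = samples[i], labels[i]
            # Regularization shrink, then a hinge-loss push on margin violations.
            w = [(1.0 - eta * lam) * wj for wj in w]
            if y * (w[0] * x[0] + w[1] * x[1]) < 1.0:
                w = [wj + eta * y * xj for wj, xj in zip(w, x)]
    return w

def fusion_score(w, x):
    """Signed proximity of a centered score pair to the separating hyperplane."""
    return w[0] * x[0] + w[1] * x[1]

def center(pair):
    # Centering the normalized [0, 1] scores lets us omit the bias term.
    return (pair[0] - 0.5, pair[1] - 0.5)

# Synthetic, well-separated (face, fingerprint) score pairs (illustrative only):
genuine = [(0.85, 0.90), (0.80, 0.75), (0.95, 0.88), (0.78, 0.92)]
impostor = [(0.15, 0.20), (0.25, 0.10), (0.05, 0.30), (0.20, 0.25)]
X = [center(p) for p in genuine + impostor]
y = [1] * len(genuine) + [-1] * len(impostor)
w = train_linear_svm(X, y)

print(fusion_score(w, center((0.9, 0.9))) > 0)  # a clear genuine pair
print(fusion_score(w, center((0.1, 0.1))) < 0)  # a clear impostor pair
```

Using the signed distance-like quantity w·x as the fusion score, rather than the hard class label, is what makes the SVM threshold-adjustable, as described next.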
In this work, a linear kernel function is used and the separating hyperplane is found using the Sequential Minimal Optimization (SMO) method. To make the method threshold-adjustable, it was modified to produce a fusion score based on the proximity of the test sample to the separating hyperplane.

Fig 6.3 Support Vector Machine Schematic Diagram.

6.7 Logistic Regression (LREG)

The variable z is called the logit. It is a measure of the total contribution of the different biometric scores, estimated from the training samples; the contribution of each score is weighted by its regression coefficient. The logistic function f(z) transforms the combined score into a probability value between 0 and 1:

f(z) = 1 / (1 + e^{-z})

z = β_0 + β_1 S_{i,1} + β_2 S_{i,2} + … + β_K S_{i,K}

Fig 6.4 Logistic Function Used in Logistic Regression Analysis.

In the expression for z, β_0 is the intercept and β_1, β_2, …, β_K are the regression coefficients. These parameters are estimated from the training samples using the Maximum Likelihood Estimation (MLE) algorithm, which is based on the Iteratively Re-weighted Least Squares (IRLS) method.

6.8 Likelihood Ratio Based Fusion (JLLR)

The term f_g(x) / f_i(x) is referred to as the likelihood ratio, where f_g and f_i are the genuine-user and impostor score densities. The logarithm of this likelihood ratio is taken as the fusion score S_{f_i}. In this method, the joint densities f(S_{i,1}, S_{i,2}, …, S_{i,K}) of the multimodal biometric scores are estimated for the impostors and the genuine users using a Gaussian Mixture Model (GMM), with the number of mixture components determined by a fitting algorithm:

S_{f_i} = log [ f_g(S_{i,1}, S_{i,2}, …, S_{i,K}) / f_i(S_{i,1}, S_{i,2}, …, S_{i,K}) ]

6.9 Product of Likelihood Ratios Fusion (MLLR)

In contrast to the JLLR algorithm, which uses the joint densities, here the marginal density f(S_{i,k}) of each biometric is modeled.
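The marginal-density likelihood-ratio idea can be sketched with a single Gaussian fitted per matcher and class, in place of the GMMs used in the text; all training scores below are illustrative:

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def fit_gaussian(samples):
    """Fit one Gaussian (the text uses a GMM; one component keeps the sketch short)."""
    mu = sum(samples) / len(samples)
    var = sum((s - mu) ** 2 for s in samples) / len(samples)
    return mu, max(math.sqrt(var), 1e-6)  # floor sigma to avoid degenerate densities

def mllr_score(score_vector, genuine_models, impostor_models):
    """Log of the product of per-matcher likelihood ratios:
    S = sum over k of [ log f_g(S_k) - log f_i(S_k) ]."""
    total = 0.0
    for s, (mg, sg), (mi, si) in zip(score_vector, genuine_models, impostor_models):
        total += math.log(gaussian_pdf(s, mg, sg)) - math.log(gaussian_pdf(s, mi, si))
    return total

# Illustrative training score pairs per matcher (face, fingerprint):
genuine_train = [(0.85, 0.90), (0.80, 0.75), (0.90, 0.85), (0.78, 0.92)]
impostor_train = [(0.20, 0.15), (0.30, 0.10), (0.10, 0.25), (0.25, 0.20)]
gen_models = [fit_gaussian([p[k] for p in genuine_train]) for k in range(2)]
imp_models = [fit_gaussian([p[k] for p in impostor_train]) for k in range(2)]

print(mllr_score((0.88, 0.90), gen_models, imp_models) > 0)  # genuine-like pair
print(mllr_score((0.15, 0.20), gen_models, imp_models) < 0)  # impostor-like pair
```

A positive fused score means the score vector is more likely under the genuine-user densities than under the impostor densities; the decision threshold can then be set directly on this log likelihood ratio.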
The likelihood ratios of the individual matchers are then multiplied together, and the logarithm of the product is used as the fusion score:

S_{f_i} = log Π_{k=1}^{K} [ f_g(S_{i,k}) / f_i(S_{i,k}) ]

Again, a GMM and the fitting algorithm are used to estimate the marginal densities.

CHAPTER 7

RESULTS

The results are presented as figures for each stage of the system: the face input, the mean face and the normalized face; the fingerprint input, the enhanced, binarized and thinned images, and the extracted minutiae points; the Euclidean distances between the test face and the trained faces; the normalized face scores, the fingerprint matching scores and the fused face and fingerprint scores; and the final output for genuine and impostor test images.

CHAPTER 8

CONCLUSION

A biometric system based on only a single biometric characteristic may not always achieve the desired performance. A multimodal biometric technique, which combines multiple biometrics in making an identification, can be used to overcome these limitations. The main goal of integrating multiple biometrics into an identification system is to improve the identification accuracy. We have developed a matching fusion scheme which integrates two different biometrics, face and fingerprint. In this scheme, the biometric that is suitable for database retrieval is used to index the template database, and the biometric that is reliable in deterring impostors is used to ensure the overall system accuracy. In addition, a matching fusion scheme which fuses the scores produced by the individual biometrics is used to make a more reliable decision. To demonstrate the efficiency of this matching fusion scheme, we have developed a prototype multimodal biometric system which integrates faces and fingerprints in making a personal identification. This system overcomes many of the limitations of both face recognition systems and fingerprint verification systems, and the experimental results demonstrate that it performs well and meets the accuracy requirements.