In this paper we present an analysis of a face recognition system that combines neural networks with subspace methods of feature extraction. We consider both a single-layer network, the Generalized Regression Neural Network (GRNN), and a multi-layer network, Learning Vector Quantization (LVQ). These neural networks are analysed on the feature vectors with respect to the recognition performance of two subspace methods, namely Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A subspace is a manifold embedded in a higher-dimensional vector space, and subspace methods extract the important features from the high-dimensional data. The experiments were performed using the standard ORL, Yale and FERET databases.
In Section IV, experimental results are discussed and the analysis is summarized. Finally, conclusions are drawn.
II. PROPOSED METHOD
In this section, an overview of the different subspace methods, PCA and FLDA, is given in detail.
A. Principal Component Analysis
PCA is a classical feature extraction and data representation technique, also known as the Karhunen-Loeve expansion [20, 21]. It is a linear method that projects high-dimensional data onto a lower-dimensional space. It seeks the projection that best represents the data; the resulting projection directions are called the principal components.
Fig. 1 Schematic illustration of PCA
Principal component analysis seeks a space of lower dimensionality, known as the principal subspace and denoted by the magenta line, such that the orthogonal projection of the data points (red dots) onto this subspace maximizes the variance of the projected points (green dots). An alternative definition of PCA is based on minimizing the sum of squares of the projection errors, indicated by the blue lines, as shown in Fig. 1.
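The equivalence of the two definitions above can be checked numerically. The following sketch is illustrative only: it uses synthetic 2-D data (not face images) and confirms that the maximum-variance direction also minimizes the summed squared projection error.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D data stretched along the x-axis.
X = rng.normal(size=(500, 2)) * np.array([3.0, 0.5])
X -= X.mean(axis=0)

# Definition 1: the principal direction maximizes projected variance,
# i.e. it is the top eigenvector of the sample covariance matrix.
C = X.T @ X / len(X)
vals, vecs = np.linalg.eigh(C)        # eigenvalues in ascending order
w_var = vecs[:, -1]

# Definition 2: the principal direction minimizes the summed squared
# projection error.  Scan candidate unit vectors and keep the best one.
thetas = np.linspace(0.0, np.pi, 1800)
errs = []
for t in thetas:
    w = np.array([np.cos(t), np.sin(t)])
    resid = X - np.outer(X @ w, w)    # error left after projecting onto w
    errs.append((resid ** 2).sum())
best = thetas[int(np.argmin(errs))]
w_err = np.array([np.cos(best), np.sin(best)])

# Both definitions pick out the same direction (up to sign).
assert abs(abs(w_var @ w_err) - 1.0) < 1e-3
```

The identity behind this is that, for a unit vector w, the squared projection error equals the total variance minus the variance along w, so minimizing one is the same as maximizing the other.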
PCA can be described as follows. Let a face image A(x, y) be a two-dimensional N-by-N array. The training set images are mapped onto a collection of vector points in this huge space, and these vector points are represented as a subspace. The subspace of face images is defined by the eigenvectors of the covariance matrix of the training set. Let the training images be Γ1, Γ2, …, ΓM; the average face is Ψ = (1/M) Σi Γi, each face differs from the mean by Φi = Γi − Ψ, and the covariance matrix is C = (1/M) Σi Φi Φi^T, whose leading eigenvectors (the eigenfaces) span the face subspace.
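A minimal sketch of this construction, following the standard eigenfaces formulation (including the usual trick of eigendecomposing the small M-by-M matrix instead of the huge N²-by-N² covariance). The random matrix standing in for ORL/Yale/FERET images and all variable names are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 20, 16                        # 20 training "faces", each N x N pixels
faces = rng.random((M, N * N))       # synthetic stand-ins for real images

# Mean face and mean-centred difference vectors Phi_i = Gamma_i - Psi.
mean_face = faces.mean(axis=0)
Phi = faces - mean_face              # M x N^2

# Eigendecompose the small M x M matrix Phi Phi^T / M; its eigenvectors,
# mapped back through Phi^T, are eigenvectors of the full covariance
# Phi^T Phi / M with the same nonzero eigenvalues.
L = Phi @ Phi.T / M
vals, vecs = np.linalg.eigh(L)       # ascending eigenvalues
order = np.argsort(vals)[::-1]       # descending

k = 10                               # keep the k leading eigenfaces
eigenfaces = Phi.T @ vecs[:, order[:k]]
eigenfaces /= np.linalg.norm(eigenfaces, axis=0)  # unit-norm columns

# Project a face onto the eigenfaces to obtain its feature vector.
weights = (faces[0] - mean_face) @ eigenfaces
```

The `weights` vector is the low-dimensional representation that would then be fed to a classifier such as the GRNN or LVQ network.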
The first and most obvious evidence supporting face-specific perception in the FFA is prosopagnosia, the inability to recognize familiar faces due to brain damage.
In the contemporary era, companies are installing facial recognition technologies on an extensive scale, reflecting a continuum of growing technological superiority and complexity. At the most basic level, this technology performs facial detection, meaning that a face is simply detected and located within a photo ("Facing Facts: Best Practices for Common Uses of Facial Recognition Technologies," 2012).
Firstly, structural encoding includes view-centred descriptions and expression-independent descriptions. View-centred descriptions derive from visual input and provide information for expression analysis, facial speech analysis, and directed visual processing. To recognize an individual by the face, view-centred descriptions have to be translated into expression-independent descriptions, which in turn activate face recognition units. In some people, prosopagnosia may be due to impaired structural encoding for faces, and they show failure in perceptual face-processing tasks. In fact, they are unable to make any sense of faces (e.g. age or gender) or to judge whether two faces are the same.
Given the orthogonality of the principal components, the transformed data set contains little redundancy, which improves the efficiency of the processing that takes place in the lower-dimensional space.
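This decorrelation property can be demonstrated directly: after rotating data into the principal axes, the sample covariance matrix becomes diagonal, i.e. the components carry no linear redundancy. The data below are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
# Correlated 3-D data: the raw features are redundant.
A = rng.normal(size=(3, 3))
X = rng.normal(size=(1000, 3)) @ A
X -= X.mean(axis=0)

# PCA rotation: eigenvectors of the sample covariance matrix.
C = X.T @ X / len(X)
_, V = np.linalg.eigh(C)
Y = X @ V                      # data expressed in the principal axes

# The covariance of the transformed data is diagonal: the orthogonal
# components are mutually uncorrelated.
C_Y = Y.T @ Y / len(Y)
off_diag = C_Y - np.diag(np.diag(C_Y))
assert np.abs(off_diag).max() < 1e-10
```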
This PCA image is more informative, as it has high variance. But here principal component 5 contains more information than principal component 3, so it is not always true that a high-variance image contains more spatial information than a low-variance one. To address this problem, QMI is applied between the class labels and the PCA image so that we
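QMI itself is not reproduced here, but the motivating point — that the highest-variance principal component is not necessarily the most class-informative — can be illustrated with a plain histogram-based mutual information estimate on synthetic data. Both the data and the estimator are illustrative assumptions, not the method of this paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
labels = rng.integers(0, 2, size=n)
# Feature 0: high variance but class-independent noise.
# Feature 1: low variance but shifted by the class label.
X = np.column_stack([rng.normal(0.0, 5.0, n),
                     rng.normal(0.0, 0.3, n) + labels])
X = X - X.mean(axis=0)

# PCA orders components by variance, so component 0 is the noisy one.
C = X.T @ X / n
vals, V = np.linalg.eigh(C)
Y = X @ V[:, np.argsort(vals)[::-1]]

def mutual_info(z, y, bins=20):
    """Histogram estimate of I(z; y) in nats for scalar z, binary y."""
    joint, _, _ = np.histogram2d(z, y, bins=(bins, 2))
    p = joint / joint.sum()
    pz = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (pz @ py)[nz])).sum())

mi = [mutual_info(Y[:, j], labels) for j in range(2)]
# The low-variance component carries more class information.
assert mi[1] > mi[0]
```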
The face recognition model developed by Bruce and Young has eight key components and suggests how we process familiar and unfamiliar faces, including facial expressions. The diagram below shows how these components are interconnected. Structural encoding is where facial features and expressions are encoded. This information is then passed, simultaneously, down two different pathways to various units. One is expression analysis, where the emotional state of the person is inferred from facial features. Facial speech analysis lets us process auditory information; this was shown by McGurk (1976), who created two video clips, one with lip movements indicating 'Ba' and the other indicating 'Fa', with the sound 'Ba' played over both clips.
Face recognition is another biometric technology, using camera-based capture much as iris recognition does: a camera takes several images of a person in order to identify them. Face recognition differs from the other biometric technologies in that the person in the picture does not have to cooperate with the process. All of the other technologies require people to participate actively, whereas in face recognition the image can be taken without the person even knowing it took place.
This kind of advanced technological analysis has been shown to improve results and to hasten the procedure of identifying crime suspects. Facial recognition also serves public surveillance, helping law enforcement officials zero in on possible suspects more easily. It has consistently proven to be quite an effective method.
This essay will discuss face recognition and several reasons why it has been studied separately. The ability to recognise faces is of huge significance in people’s daily lives and differs in important ways from other forms of object recognition (Bruce and Young, 1986). The essay will then discuss the processes involved in face recognition, drawing on the diversity of research about familiar and unfamiliar faces, including behavioural studies, studies of brain-damaged patients, and neuroimaging studies. Finally, it will discuss how face recognition differs from the recognition of other objects by involving more holistic or configural processing and different areas of the brain (Eysenck & Keane, 2005).
· Q factor analysis: correlation matrix of individual people (rather than variables) based on their characteristics; condenses a large number of people into distinctly different groups.
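The distinguishing step of Q analysis — correlating the rows (people) of the data matrix instead of its columns (variables) — can be sketched as follows. The respondent data are synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
# 10 people answering 8 questions, drawn from two respondent "types".
type_a = rng.normal(0.0, 1.0, 8)
type_b = -type_a
people = np.array([t + rng.normal(0.0, 0.2, 8)
                   for t in [type_a] * 5 + [type_b] * 5])

# R analysis would correlate the 8 variables (columns of `people`).
# Q analysis instead correlates the 10 people (rows):
Q = np.corrcoef(people)        # 10 x 10 person-by-person correlations

# Grouping people by their correlation with person 0 recovers the types.
group = Q[0] > 0
assert group[:5].all() and not group[5:].any()
```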
We have made a scanner that scans the face and examines your facial features. Depending on how many features match the stored picture, you will be granted access to the area you wish to enter. Facial recognition is a security measure that will be used in the future to grant access to places or computer programs. It is not only used to access places or computers; it is also currently being used to gather demographic data on crowds, and in some cases it is already being used to open banking accounts.
Bruce and Young’s theory of recognition tells us that humans extract several kinds of information from faces, and that there are eight different components of such information: structural encoding, expression analysis, facial speech analysis, directed visual processing, face recognition units, person identity nodes, name generation, and the cognitive system.
Therefore, the original image space is highly redundant, and sample vectors can be projected to a low-dimensional subspace when only the face patterns are of interest. A variety of subspace analysis methods, such as Eigenfaces~\cite{turk1991eigenfaces}, Fisherfaces~\cite{belhumeur1997eigenfaces}, and the Bayesian method~\cite{moghaddam2000bayesian}, have been widely used for solving these problems. One of the most useful methods is the Mutual Subspace Method (MSM)~\cite{yamaguchi1998face}.
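MSM compares two image sets by the canonical (principal) angles between the subspaces spanned by each set; the cosine of the smallest angle serves as the similarity. A hedged sketch of that computation via SVD of orthonormal bases, with illustrative random data in place of image-set eigenvectors:

```python
import numpy as np

def subspace_similarity(A, B):
    """Cosines of the canonical angles between the column spans of A and B.

    A, B: matrices whose columns span each subspace (e.g. the leading
    eigenvectors of two face-image sets, as used by MSM).
    """
    Qa, _ = np.linalg.qr(A)                 # orthonormal bases
    Qb, _ = np.linalg.qr(B)
    # Singular values of Qa^T Qb are the canonical-angle cosines,
    # sorted in descending order; the largest is the MSM similarity.
    return np.linalg.svd(Qa.T @ Qb, compute_uv=False)

rng = np.random.default_rng(5)
basis = rng.normal(size=(50, 3))
# A subspace compared with itself: every canonical cosine is 1.
s_same = subspace_similarity(basis, basis)
assert np.allclose(s_same, 1.0)
# Against an unrelated random subspace, the cosines fall below 1.
s_diff = subspace_similarity(basis, rng.normal(size=(50, 3)))
assert s_diff.max() < 1.0
```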
To construct an optimal hyperplane, SVM employs an iterative training algorithm, which is used to minimize an error function. According to the form of the error function, SVM models can be classified into four distinct groups:
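The iterative minimization of an error function can be illustrated with a minimal subgradient descent on the soft-margin hinge loss. This is a sketch of the idea only — production SVM solvers use specialised algorithms such as SMO, and the data here are synthetic.

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=200):
    """Minimize the soft-margin SVM error function
        0.5 * ||w||^2 + C * sum(max(0, 1 - y * (X @ w + b)))
    by full-batch subgradient descent.  Labels y must be in {-1, +1}.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                    # points violating the margin
        grad_w = w - C * (y[viol, None] * X[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

rng = np.random.default_rng(6)
# Two linearly separable clusters.
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)),
               rng.normal(2.0, 0.5, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
assert (pred == y).all()
```

Changing the error function (e.g. to an ε-insensitive loss for regression) yields the other model families the text alludes to, which is exactly how the four groups differ.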