
FACE RECOGNITION USING LAPLACIANFACES

A PROJECT REPORT

Submitted by
PRINCE KUMAR
(15- CSE 3074)

In partial fulfillment for the award of the degree of

BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE ENGINEERING

DEPARTMENT OF COMPUTER SCIENCE ENGINEERING


Rawal Institute of Engineering and Technology
Sohna Road, Near Zakopur, Faridabad
CERTIFICATE

TO WHOM IT MAY CONCERN

I hereby certify that “PRINCE KUMAR”, Roll No. 15 CSE 3074, of Rawal Institute of Engineering and Technology, Faridabad, has undergone six months of industrial training from 12 JAN 2019 to 10 APRIL 2019 at our organization to fulfill the requirements for the award of the degree of B.Tech (Computer Engineering). He worked on the “FACE RECOGNITION USING LAPLACIANFACES” project during the training. During his tenure with us we found him sincere and hardworking. We wish him great success in the future.

MR. ANKUSH GOYAL
HOD, Department of Computer Science
Rawal Institute of Engineering and Technology
ACKNOWLEDGEMENT
An endeavor over a period of time can be successful only with the right guidance and mentorship. I take this opportunity to express my gratitude to all those who encouraged me towards the successful completion of the project training.

I am truly grateful to Mr. Deepak Kumar (HCL) and his entire staff, who helped me understand each of the software activities and the project's details and were so generous in sharing their years of experience with a novice like me.

I owe a warm-hearted acknowledgement of gratitude to Mr. Kuldeep, from whom I have been able to learn so much about the project and each of the software activities comprehensively, and who has added depth and perspective to my understanding of the project.

Prince Kumar

15 CSE 3074
ABSTRACT

We propose an appearance-based face recognition method called the Laplacianface approach. Using Locality Preserving Projections (LPP), the face images are mapped into a face subspace for analysis. Unlike Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which effectively see only the Euclidean structure of the face space, LPP finds an embedding that preserves local information and obtains a face subspace that best detects the essential face manifold structure.

The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the face manifold. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced. Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models. We compare the proposed Laplacianface approach with the Eigenface and Fisherface methods on three different face data sets. Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition.
TABLE OF CONTENTS

CHAPTER NO TITLE PAGE NO

ABSTRACT iv
LIST OF TABLES v
LIST OF FIGURES vi
LIST OF ABBREVIATIONS vii
1. INTRODUCTION 1
1.1 Problem Definition 3
1.2 System Environment 3

2. SYSTEM ANALYSIS 5

2.1 Existing System 5


2.1.1 Principal Component Analysis 5
2.1.2 Linear Discriminant Analysis (LDA) 6
2.2 Proposed System 7
2.3 System Requirement 11
2.4 System Analysis Method 12
2.5 Feasibility Study 12
2.5.1 Technical Analysis 13
2.5.2 Economic Analysis 13
2.5.3 Performance Analysis 13
2.5.4 Efficiency Analysis 14

3. SYSTEM DESIGN 15
3.1 Project Module. 15
3.1.1 Read\Write Module. 15
3.1.2 Resizing Module. 16
3.1.3 Image Manipulation. 16
3.1.4 Testing Module. 16
3.2 System Development 18

4. IMPLEMENTATION 19

4.1 Implementation Details 19


4.1.1 Form Design 19
4.1.2 Input Design 21
4.1.3 Menu Design 21
4.1.4 Database Design 21
4.1.5 Code Design 21
4.2 Coding 23

5. SYSTEM TESTING 25

5.1 Software Testing 25


5.2 Efficiency of Laplacian Algorithm 27
5.2.1. Experimental Results 30
5.2.2 Face Recognition Using Laplacianfaces 30
5.2.3 Yale Database 31
5.2.4 PIE Database 33
5.2.5 MSRA Database 34
6. CONCLUSION 36

7 SNAPSHOTS 37

8 REFERENCES 44
LIST OF TABLES

TABLE NO TITLE PAGE NO

1 Performance Comparison on the Yale Database 32

2 Performance Comparison on the PIE Database 33

3 Performance Comparison on the MSRA Database 35
LIST OF FIGURES

FIGURE NO CAPTION PAGE NO

1 Designing Flow Diagram 17

2 Laplacian Images 28

3 Reduced Representation of Laplacian Images 29

4 Eigenvalues of LPP 29

5 Images in Yale Database 30

6 Original and Cropped Image in Yale Database 31

7 Error Rate Versus Dimensionality Reduction 32

8 Face Images in PIE Database 33

9 Face Images in MSRA Database 34


LIST OF ABBREVIATIONS

PCA : Principal Component Analysis

LDA : Linear Discriminant Analysis

LPP : Locality Preserving Projections

FRR : False Rejection Rate

FAR : False Acceptance Rate


CHAPTER-1
INTRODUCTION
A smart environment is one that is able to identify people, interpret their actions,
and react appropriately. Thus, one of the most important building blocks of smart environments
is a person identification system. Face recognition devices are ideal for such systems, since they
have recently become fast, cheap, unobtrusive, and, when combined with voice-recognition, are
very robust against changes in the environment. Moreover, since humans primarily recognize
each other by their faces and voices, they feel comfortable interacting with an environment that
does the same.

Facial recognition systems are built on computer programs that analyze images of
human faces for the purpose of identifying them. The programs take a facial image, measure
characteristics such as the distance between the eyes, the length of the nose, and the angle of the
jaw, and create a unique file called a "template." Using templates, the software then compares
that image with another image and produces a score that measures how similar the images are to
each other. Typical sources of images for use in facial recognition include video camera signals
and pre-existing photos such as those in driver's license databases.

Facial recognition systems are computer-based security systems that are able to
automatically detect and identify human faces. These systems depend on a recognition algorithm,
such as eigenface or the hidden Markov model. The first step for a facial recognition system is to
recognize a human face and extract it from the rest of the scene. Next, the system measures nodal
points on the face, such as the distance between the eyes, the shape of the cheekbones and other
distinguishable features.

These nodal points are then compared to the nodal points computed from a
database of pictures in order to find a match. Obviously, such a system is limited based on the
angle of the face captured and the lighting conditions present. New technologies are currently in
development to create three-dimensional models of a person's face based on a digital photograph
in order to create more nodal points for comparison. However, such technology is

inherently susceptible to error given that the computer is extrapolating a three-dimensional
model from a two-dimensional photograph.

Principal Component Analysis (PCA) is an eigenvector method designed to model linear variation in high-dimensional data. PCA performs dimensionality reduction by projecting the original n-dimensional data onto the k (k << n)-dimensional linear subspace spanned by the leading eigenvectors of the data's covariance matrix. Its goal is to find a set of mutually orthogonal basis functions that capture the directions of maximum variance in the data and for which the coefficients are pairwise decorrelated. For linearly embedded manifolds, PCA is guaranteed to discover the dimensionality of the manifold and produces a compact representation.

Facial Recognition Applications:

Facial recognition is deployed in large-scale citizen identification applications, surveillance applications, law enforcement applications such as booking stations, and kiosks.

1.1 Problem Definition

Facial recognition systems are computer-based security systems that are able to
automatically detect and identify human faces. These systems depend on a recognition algorithm.
However, most of these algorithms consider only global data patterns during the recognition process, which does not yield an accurate recognition system. We therefore propose a face recognition system that can recognize faces with the maximum accuracy possible.

1.2 System Environment

The front end is designed and executed with J2SDK 1.4.0, handling the core Java part with Swing user-interface components. Java is a robust, object-oriented, multi-threaded, distributed, secure, and platform-independent language. It has a wide variety of packages to implement our requirements, and a number of classes and methods can be utilized for programming purposes. These features make it much easier for the programmer to implement the required concepts and algorithms in Java.

The features of Java as follows:

Core Java contains concepts like exception handling, multithreading, and streams that can be well utilized in the project environment.

Exception handling can be done with predefined exceptions, and there is provision for writing custom exceptions for our application. Garbage collection is done automatically, so memory management is very secure.

The user interface can be built with the Abstract Window Toolkit (AWT) and also the Swing classes. These provide a variety of classes for components and containers. We can make instances of these classes, and these instances denote particular objects that can be utilized in our program.

Event handling is performed with the delegation event model. Objects are registered with a listener that watches for events; when an event takes place, the corresponding handler method is called on the listener, which is defined in the form of an interface, and executed.

This application makes use of the ActionListener interface, and click events are handled through it. The separate actionPerformed() method contains the details of the response to the event.
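As a minimal illustration (the button and handler names here are hypothetical and not taken from the project's source), a Swing button can be wired to an ActionListener like this:

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;

public class TrainButtonDemo
{
    public static void main(String[] args)
    {
        // Hypothetical "Train" button; the real form and component names may differ.
        JButton trainButton = new JButton("Train");
        trainButton.addActionListener(new ActionListener()
        {
            // actionPerformed() is invoked by the listener when the button is clicked.
            public void actionPerformed(ActionEvent e)
            {
                System.out.println("Training started...");
            }
        });
    }
}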

Java also contains concepts like Remote Method Invocation (RMI) and networking, which can be useful in a distributed environment.

CHAPTER-2
SYSTEM ANALYSIS

2.1 Existing System:


Many face recognition techniques have been developed over the past few decades. One of the most successful and well-studied techniques for face recognition is the appearance-based method. When using appearance-based methods, we usually represent an image of size n × m pixels by a vector in an (n × m)-dimensional space. In practice, however, these (n × m)-dimensional spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques.

Two of the most popular techniques for this purpose are:


2.1.1 Principal Component Analysis (PCA).
2.1.2 Linear Discriminant Analysis (LDA).

2.1.1 Principal Component Analysis (PCA):


The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables), which is needed to describe the data economically. This is the case when there is a strong correlation between observed variables. The tasks which PCA can perform include prediction, redundancy removal, feature extraction, data compression, etc. Because PCA is a powerful technique for the linear domain, applications having linear models are suitable, such as signal processing, image processing, system and control theory, communications, etc.

The main idea of using PCA for face recognition is to express the large 1-D vector of pixels constructed from a 2-D face image in terms of the compact principal components of the feature space. This is called eigenspace projection. The eigenspace is calculated by identifying the eigenvectors of the covariance matrix derived from a set of face images (vectors).
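In symbols, this is the standard formulation of eigenspace projection (included here for reference, not taken from the project code): given M training vectors xi with mean μ = (1/M) ∑i xi, the covariance matrix is C = (1/M) ∑i (xi − μ)(xi − μ)T; its leading eigenvectors u1, ..., uk (the Eigenfaces) satisfy C uj = λj uj; and a face vector x is projected to the k-dimensional representation y = UkT(x − μ), where Uk = [u1, ..., uk].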

2.1.2 Linear Discriminant Analysis (LDA):

LDA is a supervised learning algorithm. LDA searches for the projection axes on which the data points of different classes are far from each other while requiring data points of the same class to be close to each other. Unlike PCA, which encodes information in an orthogonal linear space, LDA encodes discriminating information in a linearly separable space using bases that are not necessarily orthogonal. It is generally believed that algorithms based on LDA are superior to those based on PCA.
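Formally, this is usually expressed with the Fisher criterion (a standard statement, included here for reference): LDA seeks the projection W that maximizes

J(W) = |WT SB W| / |WT SW W|

where SB is the between-class scatter matrix and SW is the within-class scatter matrix of the training data.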

However, most of these algorithms consider only global data patterns during the recognition process, which does not yield an accurate recognition system. The drawbacks of the existing system are:

 Less accurate

 Does not deal with manifold structure

 Does not deal with biometric characteristics

2.2 Proposed System:

PCA and LDA aim to preserve the global structure. However, in many real-
world applications, the local structure is more important. In this section, we describe Locality
Preserving Projection (LPP), a new algorithm for learning a locality preserving subspace.

The objective function of LPP is as follows:

min_W ∑ij (yi − yj)² Sij,  where yi = WTxi

and Sij is a weight matrix whose entry is large when the face images xi and xj are close to each other in the image space.

The manifold structure is modeled by a nearest-neighbor graph which preserves the local structure of the image space. A face subspace is obtained by Locality Preserving Projections (LPP). Each face image in the image space is mapped to a low-dimensional face subspace, which is characterized by a set of feature images, called Laplacianfaces. The face subspace preserves local structure and seems to have more discriminating power than the PCA approach for classification purposes.

We also provide a theoretical analysis to show that PCA, LDA, and LPP can be obtained from different graph models. Central to this is a graph structure that is inferred on the data points. LPP finds a projection that respects this graph structure. In our theoretical analysis, we show how PCA, LDA, and LPP arise from the same principle applied to different choices of this graph structure.

It is worthwhile to highlight several aspects of the proposed approach here:

1. While the Eigenfaces method aims to preserve the global structure of the image space, and the Fisherfaces method aims to preserve the discriminating information, our Laplacianfaces method aims to preserve the local structure of the image space, which real-world applications mostly need.

2. An efficient subspace learning algorithm for face recognition should be able to discover the nonlinear manifold structure of the face space. Our proposed Laplacianfaces method explicitly considers the manifold structure, which is modeled by an adjacency graph and reflects the intrinsic structure of the face manifold.

3. LPP shares some similar properties with LLE. LPP is linear, while LLE is nonlinear. Moreover, LPP is defined everywhere, while LLE is defined only on the training data points and it is unclear how to evaluate the maps for new test points. In contrast, LPP may simply be applied to any new data point to locate it in the face subspace.

The algorithmic procedure of Laplacianfaces is formally stated below:

1. PCA projection.
We project the image set into the PCA subspace by throwing away the smallest
principal components. In our experiments, we kept 98 percent of the information in the sense of
reconstruction error. For the sake of simplicity, we still use x to denote the images in the
PCA subspace in the following steps. We denote by WPCA the transformation matrix of PCA.

7
2. Constructing the nearest-neighbor graph.
Let G denote a graph with n nodes. The ith node corresponds to the face image xi. We put an edge between nodes i and j if xi and xj are “close,” i.e., xj is among the k nearest neighbors of xi, or xi is among the k nearest neighbors of xj. The constructed nearest-neighbor graph is an approximation of the local manifold structure. Note that here we do not use the ε-neighborhood to construct the graph. This is simply because it is often difficult to choose an optimal ε in real-world applications, while the k-nearest-neighbor graph can be constructed more stably. The disadvantage is that the k-nearest-neighbor search will increase the computational complexity of our algorithm. When the computational complexity is a major concern, one can switch to the ε-neighborhood.

3. Choosing the weights. If nodes i and j are connected, put

Sij = exp(−||xi − xj||² / t)

where t is a suitable constant. Otherwise, put Sij = 0. The weight matrix S of graph G models the face manifold structure by preserving local structure. The justification for this choice of weights can be traced back to the Laplacian Eigenmaps method.

4. Eigenmap. Compute the eigenvectors and eigenvalues for the generalized eigenvector problem

XLXT w = λ XDXT w

where D is a diagonal matrix whose entries are column (or row, since S is symmetric) sums of S, Dii = ∑j Sji, and L = D − S is the Laplacian matrix. The ith column of matrix X is xi.

Let w0, w1, ..., wk−1 be the solutions of the above equation, ordered according to their eigenvalues, 0 ≤ λ0 ≤ λ1 ≤ ... ≤ λk−1. These eigenvalues are equal to or greater than zero because the matrices XLXT and XDXT are both symmetric and positive semi-definite. Thus, the embedding is as follows:

x → y = WTx,  W = WPCA WLPP,  WLPP = [w0, w1, ..., wk−1]

where y is a k-dimensional vector and W is the transformation matrix. This linear mapping best preserves the manifold's estimated intrinsic geometry in a linear sense. The column vectors of W are the so-called Laplacianfaces.

This principle is implemented with an unsupervised learning concept, using training and test data.
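To make steps 2, 3, and 4 concrete, the following is a minimal sketch (not the project's actual code; the neighborhood size k and the heat-kernel constant t are assumptions) of how the weight matrix S and the Laplacian L = D − S could be built in Java:

// Sketch: build the k-nearest-neighbor weight matrix S and the graph Laplacian L = D - S.
// x is an array of n face vectors (each already projected into the PCA subspace).
public class LaplacianGraph
{
    public static double[][] buildLaplacian(double[][] x, int k, double t)
    {
        int n = x.length;
        double[][] s = new double[n][n];

        for (int i = 0; i < n; i++)
        {
            for (int j = 0; j < n; j++)
            {
                // Put an edge if xj is among the k nearest neighbors of xi, or vice versa.
                if (i != j && (isKNearest(x, i, j, k) || isKNearest(x, j, i, k)))
                {
                    s[i][j] = Math.exp(-squaredDistance(x[i], x[j]) / t);   // heat-kernel weight
                }
            }
        }

        double[][] l = new double[n][n];
        for (int i = 0; i < n; i++)
        {
            double dii = 0.0;
            for (int j = 0; j < n; j++)
            {
                dii += s[i][j];                                 // Dii = sum_j Sij (S is symmetric)
            }
            for (int j = 0; j < n; j++)
            {
                l[i][j] = (i == j ? dii : 0.0) - s[i][j];       // L = D - S
            }
        }
        return l;
    }

    // True if x[j] is among the k nearest neighbors of x[i].
    private static boolean isKNearest(double[][] x, int i, int j, int k)
    {
        double dij = squaredDistance(x[i], x[j]);
        int closer = 0;
        for (int m = 0; m < x.length; m++)
        {
            if (m != i && m != j && squaredDistance(x[i], x[m]) < dij)
            {
                closer++;
            }
        }
        return closer < k;
    }

    private static double squaredDistance(double[] a, double[] b)
    {
        double sum = 0.0;
        for (int d = 0; d < a.length; d++)
        {
            double diff = a[d] - b[d];
            sum += diff * diff;
        }
        return sum;
    }
}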

The system is required to implement Principal Component Analysis to reduce the images to a dimension less than n, using the covariance of the data. The system uses an unsupervised learning algorithm, so it must be trained properly with relevant data sets. Based on this training, input data is tested by the application and the result is displayed to the user.

2.3 System Requirement

Hardware specifications:
Processor : Intel Processor IV
RAM : 128 MB
Hard disk : 20 GB
CD drive : 40 x Samsung
Floppy drive : 1.44 MB
Monitor : 15" Samtron color

Keyboard : 108 mercury keyboard
Mouse : Logitech mouse

Software Specification:
Operating System – Windows XP/2000
Language used – J2SDK 1.4.0

2.4 System Analysis Methods


System analysis can be defined as a method of determining how to use resources and machines in the best manner and perform tasks to meet the information needs of an organization. It is also a management technique that helps us in designing new systems or improving existing ones. The four basic elements in system analysis are:
 Output
 Input
 Files
 Process
The above-mentioned are the four bases of system analysis.

2.5 Feasibility Study
Feasibility is the study of whether or not the project is worth doing. The process that follows this determination is called a feasibility study. This study is undertaken within the given time constraints and normally culminates in a written and oral feasibility report. The feasibility study is categorized into the following types:
 Technical Analysis
 Economical Analysis
 Performance Analysis
 Control and Security Analysis
 Efficiency Analysis
 Service Analysis

2.5.1 Technical Analysis

This analysis is concerned with specifying the software that will successfully satisfy the user requirements. The technical needs of a system include the ability to produce the outputs in a given time and the required response time under certain conditions.

2.5.2 Economic Analysis

Economic analysis is the most frequently used technique for evaluating the effectiveness of a proposed system. This is called cost/benefit analysis. It is used to determine the benefits and savings that are expected from a proposed system and compare them with the costs. If the benefits outweigh the costs, then the decision is made to proceed to the design phase and implement the system.

2.5.3 Performance Analysis

The analysis of the performance of a system is also very important. This analysis examines the performance of the system both before and after the proposed system is introduced. If the analysis proves satisfactory from the company's side, then its result is carried forward to the next analysis phase. Performance analysis is essentially observing program execution to pinpoint where bottlenecks or other performance problems, such as memory leaks, might occur. If a problem is spotted, it can be rectified.

2.5.4 Efficiency Analysis

This analysis mainly deals with the efficiency of the system based on this project. The resources required by the program to perform a particular function are analyzed in this phase. It also checks how efficient the project is on the system, in spite of any changes to the system. The efficiency of the system should be analyzed in such a way that the user should not feel any difference in the way of working. Besides, it should be taken into consideration that the project on the system should last for a long time.

CHAPTER-3
SYSTEM DESIGN

Design is concerned with identifying software components and specifying the relationships among them, specifying the software structure, and providing a blueprint for the implementation phase.

Modularity is one of the desirable properties of large systems. It implies that the system is divided into several parts in such a manner that the interaction between parts is minimal and clearly specified.

Design will explain the software components in detail. This will help the implementation of the system. Moreover, it will guide further changes in the system to satisfy future requirements.

3.1 Project modules:

3.1.1 Read/Write Module:

This module provides the basic operations for loading input images and saving the resultant images produced by the algorithms. The image files are read, processed, and new images are written to the output files.

3.1.2 Resizing Module:

Here, the face images are converted to an equal size using a linear resizing algorithm, for calculation and comparison. In this module, larger or smaller images are converted to a standard size.

3.1.3 Image Manipulation:

Here, the face recognition algorithm using Locality Preserving Projections (LPP) is developed for the various faces enrolled into the database.

3.1.4 Testing Module:

Here, the input images are resized and compared with the intermediate images to find the test image, which is then compared with the Laplacianfaces to find the most accurate match.

FIGURE:1

Designing Flow Diagram

3.2 System Development

This system is developed to implement Principal Component Analysis. Image manipulation: this module is designed to view all the faces that are considered in our training case. Principal Component Analysis is an eigenvector method designed to model linear variation in high-dimensional data. PCA performs dimensionality reduction by projecting the original n-dimensional data onto the k (k << n)-dimensional linear subspace spanned by the leading eigenvectors of the data's covariance matrix. Its goal is to find a set of mutually orthogonal basis functions that capture the directions of maximum variance in the data and for which the coefficients are pairwise decorrelated. For linearly embedded manifolds, PCA is guaranteed to discover the dimensionality of the manifold and produces a compact representation.

1) Training module:
Unsupervised learning is learning from observation and discovery. The data mining system is supplied with objects but no classes are defined, so it has to observe the examples and recognize patterns (i.e., class descriptions) by itself. This process requires a training data set. This system provides a training set of 17 faces, each with three different poses. It undergoes an iterative process and stores the required detail in a two-dimensional faceTemplate array; a sketch of this idea follows.
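A minimal sketch of loading such a training set (the folder layout, file names, and image format are assumptions, not the project's actual code; the PGM reader class is the one used elsewhere in this report):

// Sketch: load 17 persons x 3 poses of training faces into a 2-D template array.
public class TrainingSetLoader
{
    public static double[][] loadFaceTemplates()
    {
        double[][] faceTemplate = new double[17 * 3][];       // one flattened pixel vector per image
        int index = 0;

        for (int person = 1; person <= 17; person++)
        {
            for (int pose = 1; pose <= 3; pose++)
            {
                // Hypothetical file naming; the real folder layout may differ.
                PGM img = new PGM();
                img.setFilePath("train/face" + person + "_" + pose + ".pgm");
                img.readImage();

                double[] pixels = new double[img.getRows() * img.getCols()];
                for (int r = 0; r < img.getRows(); r++)
                {
                    for (int c = 0; c < img.getCols(); c++)
                    {
                        pixels[r * img.getCols() + c] = img.getPixel(r, c);
                    }
                }
                faceTemplate[index++] = pixels;               // store the training vector
            }
        }
        return faceTemplate;
    }
}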

2) Test module:
After the training process is over, the system processes the input face image through the eigenface process and can then say whether the face is recognized or not.

CHAPTER-4
IMPLEMENTATION

Implementation includes all those activities that take place to convert from the old system to the new. The new system may be totally new, replacing an existing system, or it may be a major modification to the system currently in use.

This system, “Face Recognition,” is a new system. Implementation as a whole involves all the tasks that are done to successfully replace the existing software or introduce new software to satisfy the requirements.

The entire work can be described as the retrieval of faces from the database, processing them with the eigenface training method, executing the test cases, and finally displaying the result to the user.

The test cases were performed in all aspects and the system gave correct results in all cases.

4.1. Implementation Details:


4.1.1 Form design
A form is a tool with a message; it is the physical carrier of data or information. It can also constitute authority for actions. In the form design, separate forms are used for each module. The following is a list of forms used in this project:

1) Main Form
Contains options for viewing faces from the database. The system retrieves the images stored in the train and test folders, which are available in the bin folder of the application.

2) View database Form:


This form retrieves the faces available in the train folder. It is just for viewing purposes for the user.

3) Recognition Form:

This form provides an option for loading an input image from the test folder. The user then has to click the Train button, which leads the application to train and gain knowledge, as it follows an unsupervised learning algorithm.

Unsupervised learning is learning from observation and discovery. The data mining system is supplied with objects but no classes are defined, so it has to observe the examples and recognize patterns (i.e., class descriptions) by itself. This results in a set of class descriptions, one for each class discovered in the environment. This is similar to cluster analysis in statistics.
The user can then click the Test button to see the matching of the faces. The matched face will be displayed in the place provided for the matched face. In case of any mismatch, the information will be displayed in the space provided on the form.

4.1.2 Input design


Inaccurate input data is the most common cause of errors in data processing. Errors entered by data entry operators can be controlled by input design. Input design is the process of converting user-originated inputs into a computer-based format. Input data are collected and organized into groups of similar data.

4.1.3 Menu Design


The menu in this application is organized into an MDI form that organizes the viewing of image files from the folders. It also has options for loading an image as input, performing the training method, and testing whether the face is recognized or not.

4.1.4 Database Design:


A database is a collection of related data. A database has the following properties:

i) A database reflects changes in the information.
ii) A database is a logically coherent collection of data with some inherent meaning.

This application takes the images from the default folders set for this application, the train and test folders. The image file extension is .jpeg.

4.1.5 Code Design


o Face Enrollment
 - a new face can be added by the user into the facespace database.
o Face Verification
 - verifies a person's face in the database with reference to his/her identity.
o Face Recognition
 - compares a person's face with all the images in the database and chooses the closest match. Here, Principal Component Analysis is performed with the training data set, and the result is obtained from the test data set.
o Face Retrieval
 - displays all the faces and their templates in the database.
o Statistics
 - stores a list of recognition accuracies for analyzing the FRR (False Rejection Rate) and FAR (False Acceptance Rate); the standard definitions of these rates are given below.
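For reference, the standard definitions of these two rates (they are not specific to this project's implementation) are:

FAR = (number of impostor attempts accepted) / (total number of impostor attempts)
FRR = (number of genuine attempts rejected) / (total number of genuine attempts)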

4.2 Coding:

import java.lang.*;
import java.io.*;

public class PGM_ImageFilter

{
//fields (declarations implied by the constructor and the methods below)
private String inFilePath;
private String outFilePath;
private boolean printStatus=true;

//constructor
public PGM_ImageFilter()
{
inFilePath="";
outFilePath="";
}

//get functions
public String get_inFilePath()
{
return(inFilePath);
}

public String get_outFilePath()


{
return(outFilePath);
}

//set functions
public void set_inFilePath(String tFilePath)
{
inFilePath=tFilePath;
}

public void set_outFilePath(String tFilePath)


{
outFilePath=tFilePath;
}

//methods
public void resize(int wout,int hout)
{
PGM imgin=new PGM();
PGM imgout=new PGM();

if(printStatus==true)
{
System.out.print("\nResizing...");
}
int r,c,inval,outval;

//read input image


imgin.setFilePath(inFilePath);
imgin.readImage();

//set output-image header


imgout.setFilePath(outFilePath);
imgout.setType("P5");
imgout.setComment("#resized image");
imgout.setDimension(wout,hout);
imgout.setMaxGray(imgin.getMaxGray());

//resize algorithm (linear)


double win,hin;
int xi,ci,yi,ri;

win=imgin.getCols();
hin=imgin.getRows();

for(r=0;r<imgout.getRows();r++)
{
for(c=0;c<imgout.getCols();c++)
{
xi=c;
yi=r;

ci=(int)(xi*((double)win/(double)wout));
ri=(int)(yi*((double)hin/(double)hout));

inval=imgin.getPixel(ri,ci);
outval=inval;

imgout.setPixel(yi,xi,outval);
}
}

if(printStatus==true)
{
System.out.println("done.");
}

//write output image
imgout.writeImage();
}
//end of class PGM_ImageFilter
}
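A brief usage sketch (the file names and the 20 x 28 target size are illustrative assumptions; the PGM image class itself is part of the project but is not listed here):

public class ResizeDemo
{
    public static void main(String[] args)
    {
        // Resize a hypothetical input face image to a 20 x 28 standard size.
        PGM_ImageFilter filter = new PGM_ImageFilter();
        filter.set_inFilePath("train/face1.pgm");
        filter.set_outFilePath("train/face1_resized.pgm");
        filter.resize(20, 28);
    }
}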

CHAPTER-5
SYSTEM TESTING

5.1 Software Testing


Software Testing is the process of confirming the functionality and correctness of
software by running it. Software testing is usually performed for one of two reasons:

i) Defect detection
ii)Reliability estimation.

Software Testing contains two types of testing. They are


1) White Box Testing
2) Black Box Testing

1) White Box Testing

White box testing is concerned only with testing the software product; it cannot guarantee that the complete specification has been implemented. White box testing is testing against the implementation and will discover faults of commission, indicating that part of the implementation is faulty.

2) Black Box Testing

22
Black box testing is concerned only with testing the specification; it cannot guarantee that all parts of the implementation have been tested. Thus, black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled.

Functional testing is a testing process that is black box in nature. It is aimed at examining the overall functionality of the product. It usually includes testing of all the interfaces and should therefore involve the clients in the process.

The key to software testing is trying to find the myriad failure modes, something that would require exhaustively testing the code on all possible inputs. For most programs, this is computationally infeasible. Techniques that attempt to test as many of the syntactic features of the code as possible (within some set of resource constraints) are called white box software testing techniques. Techniques that do not consider the code's structure when test cases are selected are called black box techniques.

In order to fully test a software product, both black and white box testing are required. The problem of applying software testing to defect detection is that testing can only suggest the presence of flaws, not their absence (unless the testing is exhaustive). The problem of applying software testing to reliability estimation is that the input distribution used for selecting test cases may be flawed. In both of these cases, the mechanism used to determine whether program output is correct is often impossible to develop. Obviously, the benefit of the entire software testing process is highly dependent on many different pieces. If any of these parts is faulty, the entire process is compromised.

Software is unique, unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software can fail in many bizarre ways. Detecting all of the different failure modes for software is generally infeasible.

The final stage of the testing process should be system testing. This type of test involves examination of the whole computer system: all the software components, all the hardware components, and any interfaces. The whole computer-based system is checked not only for validity but also to ensure that it meets the objectives.

5.2 Efficiency of Laplacian Algorithm

Now, consider a simple example of image variability. Imagine that a set of face images is generated while the human face rotates slowly. Thus, we can say that the set of face images is intrinsically one-dimensional. Many recent works show that face images reside on a low-dimensional manifold embedded in the image space. Therefore, an effective subspace learning algorithm should be able to detect the nonlinear manifold structure. PCA and LDA effectively see only the Euclidean structure; thus, they fail to detect the intrinsic low dimensionality. With their neighborhood-preserving character, the Laplacianfaces capture the intrinsic face manifold structure.

FIGURE:2

Two-dimensional linear embedding of face images by Laplacianfaces

Fig. 1 shows an example in which the face images of a person, with various poses and expressions, are mapped into a two-dimensional subspace. The size of each image is 20 × 28 pixels, with 256 gray levels per pixel. Thus, each face image is represented by a point in the 560-dimensional ambient space. However, these images are believed to come from a submanifold with few degrees of freedom.

The face images are mapped into a two-dimensional space with continuous change in pose and expression. Representative face images are shown in different parts of the space. The face images are divided into two parts: the left part includes the face images with an open mouth, and the right part includes the face images with a closed mouth. This is because, in trying to preserve local structure, the mapping keeps neighboring points in the image space close together in the face subspace. The 10 testing samples can be simply located in the reduced representation space by the Laplacianfaces (column vectors of the matrix W).

FIGURE:3

Fig. 2. Distribution of the 10 testing samples in the reduced representation subspace. As can be
seen, these testing samples optimally find their coordinates which reflect their intrinsic
properties, i.e., pose and expression.

As can be seen, these testing samples optimally find their coordinates which reflect their
intrinsic properties, i.e., pose and expression. This observation tells us that the Laplacianfaces are
capable of capturing the intrinsic face manifold structure.

FIGURE:4

The eigenvalues of LPP and Laplacian Eigenmaps.

Fig. 3 shows the eigenvalues computed by the two methods. As can be seen, the eigenvalues of LPP are consistently greater than those of Laplacian Eigenmaps.

5.2. 1 Experimental Results


A face image can be represented as a point in image space. However, due to the
unwanted variations resulting from changes in lighting, facial expression, and pose, the image
space might not be an optimal space for visual representation.
We can display the eigenvectors as images. These images may be called
Laplacianfaces. Using the Yale face database as the training set, we present the first 10
Laplacianfaces in Fig. 4, together with the Eigenfaces and Fisherfaces. A face image can be mapped into the locality preserving subspace by using the Laplacianfaces.

FIGURE:5

Fig. (a) Eigenfaces, (b) Fisherfaces, and (c) Laplacianfaces calculated from the face
images in the YALE database.

5.2.2 Face Recognition Using Laplacianfaces


In this section, we investigate the performance of our proposed Laplacianfaces
method for face recognition. The system performance is compared with the Eigenfaces method and the Fisherfaces method.

In this study, three face databases were tested. The first one is the PIE (pose, illumination, and expression) database, the second one is the Yale database, and the third one is the MSRA database.

In short, the recognition process has three steps. First, we calculate the Laplacianfaces
from the training set of face images; then the new face image to be identified is projected into the
face subspace spanned by the Laplacianfaces; finally, the new face image is identified by a
nearest neighbor classifier.
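A minimal sketch of the final nearest-neighbor step (illustrative only, not the project's actual code), assuming the training and test images have already been projected into the Laplacianface subspace:

// Sketch: nearest-neighbor classification in the Laplacianface subspace.
public class NearestNeighborClassifier
{
    // trainFeatures[i] is the projected feature vector of training image i,
    // trainLabels[i] is its person identity, and testFeature is the projected probe image.
    public static int classify(double[][] trainFeatures, int[] trainLabels, double[] testFeature)
    {
        int bestLabel = -1;
        double bestDistance = Double.MAX_VALUE;

        for (int i = 0; i < trainFeatures.length; i++)
        {
            double distance = 0.0;
            for (int d = 0; d < testFeature.length; d++)
            {
                double diff = trainFeatures[i][d] - testFeature[d];
                distance += diff * diff;              // squared Euclidean distance
            }
            if (distance < bestDistance)
            {
                bestDistance = distance;
                bestLabel = trainLabels[i];           // remember the closest training identity
            }
        }
        return bestLabel;                             // identity of the nearest training face
    }
}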

FIGURE:6

Fig. 5. The original face image and the cropped image.

5.2.3 Yale Database


The Yale face database was constructed at the Yale Center for Computational Vision and Control. It contains 165 grayscale images of 15 individuals. The images demonstrate variations in lighting condition (left-light, center-light, right-light), facial expression (normal, happy, sad, sleepy, surprised, and wink), and with/without glasses.

A random subset with six images was taken for the training set. The rest was taken for
testing. The testing samples were then projected into the low-dimensional Representation.
Recognition was performed using a nearest-neighbor classifier.

In general, the performance of the Eigenfaces method and the Laplacianfaces method varies with the number of dimensions. We show the best results obtained by Fisherfaces, Eigenfaces, and Laplacianfaces. The recognition results are shown in Table 1. It is found that the Laplacianfaces method significantly outperforms both the Eigenfaces and Fisherfaces methods.

TABLE :1

Performance Comparison on the Yale Database

FIGURE:7

Fig. 6 shows the plots of error rate versus dimensionality reduction.

5.2.4 PIE Database


Fig. 7 shows some of the faces with pose, illumination, and expression variations in the PIE database. Table 2 shows the recognition results. As can be seen, Fisherfaces performs comparably to our algorithm on this database, while Eigenfaces performs poorly. The error rates of Laplacianfaces, Fisherfaces, and Eigenfaces are compared; as can be seen, the error rate of our Laplacianfaces method decreases quickly as the dimensionality of the face subspace increases.

FIGURE:8

Fig. 7. The sample cropped face images of one individual from PIE database. The original face
images are taken under varying pose, illumination, and expression.

TABLE :2

Performance Comparison on the PIE Database

5.2.5 MSRA Database

This database was collected at Microsoft Research Asia. Sixty-four to eighty face images were collected for each individual in each session. All the faces are frontal. Fig. 9 shows sample cropped face images from this database. In this test, one session was used for training and the other was used for testing.

FIGURE:9

Fig. 8. The sample cropped face images of one individual from the MSRA database. The original face images are taken under varying pose, illumination, and expression.

TABLE :3

Table 3 shows the recognition results. The Laplacianfaces method has a lower error rate than those of Eigenfaces and Fisherfaces.

Performance Comparison on the MSRA Database

Performance comparison on the MSRA database with different numbers of training samples.

CHAPTER-6
CONCLUSION

Our system proposes the use of Locality Preserving Projections in face recognition, which eliminates the flaws of the existing system. This system reduces the faces to lower dimensions, and the LPP algorithm is performed for recognition. The application has been developed successfully and implemented as described above.

The system works well. It can be provided with a proper training data set and a test input for recognition. Whether the face matched or not is reported as a picture image if matched, and as a text message in case of any difference.

SNAPSHOTS

OUTPUT PAGE: 1

OUTPUT PAGE: 2

OUTPUT PAGE: 3

OUTPUT PAGE: 4

OUTPUT PAGE: 5

OUTPUT PAGE: 6

OUTPUT PAGE: 7

REFERENCES

1. X. He and P. Niyogi, “Locality Preserving Projections,” Proc. Conf. Advances in Neural Information Processing Systems, 2003.

2. A.U. Batur and M.H. Hayes, “Linear Subspace for Illumination Robust Face Recognition,” Dec. 2001.

3. M. Belkin and P. Niyogi, “Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering.”

4. P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, “Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection,” IEEE Trans. Pattern Analysis and Machine Intelligence, July 1997.

5. M. Belkin and P. Niyogi, “Using Manifold Structure for Partially Labeled Classification,” 2002.

6. M. Brand, “Charting a Manifold,” Proc. Conf. Advances in Neural Information Processing Systems, 2002.

7. F.R.K. Chung, “Spectral Graph Theory,” Regional Conf. Series in Math., no. 92, 1997.

8. Y. Chang, C. Hu, and M. Turk, “Manifold of Facial Expression,” Proc. IEEE Int'l Workshop Analysis and Modeling of Faces and Gestures, Oct. 2003.

9. R. Gross, J. Shi, and J. Cohn, “Where to Go with Face Recognition,” Proc. Third Workshop Empirical Evaluation Methods in Computer Vision, Dec. 2001.

10. A.M. Martinez and A.C. Kak, “PCA versus LDA,” IEEE Trans. Pattern Analysis and Machine Intelligence, Feb. 2001.

