
A PCA-based Feature Extraction Method for Face Recognition
Face Recognition: Introduction
• Given an image or a sequence of images of a scene, identify or authenticate one or more people in the scene
• It sounds simple
• But it turns out to be a rather challenging task:
◦ Automatically locate the face
◦ Recognize the face from a general viewpoint under different illumination conditions, facial expressions, facial accessories, aging effects, etc.
Cont.
It involves recognizing people by their:
• Facial features
• Face geometry

Principle:
Analysis of the unique shape, pattern, and positioning of facial features.
General Face Recognition Steps
• Face Detection
• Face Normalization
• Face Identification
Recognition v/s Verification
• Identification (1:N)
[Diagram: a biometric reader feeds the biometric matcher, which searches the whole database and outputs an identity, e.g. "This person is Emily".]
• Verification (1:1)
[Diagram: a claimed identity ("I am Jigar") and the biometric reader feed the biometric matcher, which compares against that single database record and outputs Match / No match.]
Difficulties with Conventional PCA
• Global projection suppresses local information, and it is not resilient to variations in face illumination conditions and facial expressions
• It does not take the discriminative task into account
◦ Ideally, we wish to compute features that allow good discrimination
◦ This is not the same as the largest variance
Cont.
• To confine illumination and facial-expression variations to local areas
◦ Divide a face image into several sub-images, and carry out the PCA computation on each local area independently (a partitioning sketch follows below)
• To account for the fact that different parts of the human face have different discrimination capabilities
◦ Adaptively compute the contribution factor of each local area, and incorporate the contribution factor into the final classification decision
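A minimal NumPy sketch of the sub-image partitioning step; the 64x64 image size and 16x16 block size are illustrative assumptions, not values from the slides:

```python
import numpy as np

def partition_into_subpatterns(face, block_h=16, block_w=16):
    """Split a 2-D face image into non-overlapping sub-patterns.

    Each sub-pattern is flattened into a vector so that PCA can be
    run on every local area independently.
    """
    h, w = face.shape
    patches = []
    for r in range(0, h - block_h + 1, block_h):
        for c in range(0, w - block_w + 1, block_w):
            patch = face[r:r + block_h, c:c + block_w]
            patches.append(patch.reshape(-1))
    return np.array(patches)       # shape: (num_subpatterns, block_h * block_w)

# Example: a hypothetical 64x64 face gives 16 sub-patterns of 16x16 pixels each
face = np.random.rand(64, 64)
print(partition_into_subpatterns(face).shape)   # (16, 256)
```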
Eigenfaces
• Modeling
1. Given a collection of n labeled training images,
2. Compute the mean image and covariance matrix.
3. Compute the k eigenvectors (note that these are images) of the covariance matrix corresponding to the k largest eigenvalues.
4. Project the training images onto the k-dimensional eigenspace.
• Recognition
1. Given a test image, project it onto the eigenspace.
2. Classify it against the projected training images (a sketch of both stages follows below).
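A minimal NumPy sketch of these two stages; the SVD shortcut and the nearest-neighbour classifier are common choices, assumed here rather than specified on the slide:

```python
import numpy as np

def train_eigenfaces(images, k):
    """images: (n, N*N) array of flattened training faces."""
    mean = images.mean(axis=0)
    A = (images - mean).T                    # N^2 x n matrix of centred faces
    # Eigenvectors of the covariance matrix A A^T, obtained via the SVD of A
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    eigenfaces = U[:, :k]                    # each column is an "eigenface"
    projections = eigenfaces.T @ A           # k x n coordinates of the training faces
    return mean, eigenfaces, projections

def recognize(test_image, mean, eigenfaces, projections, labels):
    """Project a flattened test face and return the nearest training label."""
    w = eigenfaces.T @ (test_image - mean)
    distances = np.linalg.norm(projections - w[:, None], axis=0)
    return labels[np.argmin(distances)]
```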
Eigenfaces
• PCA seeks directions that are efficient for representing the data
• PCA reduces the dimension of the data
• This speeds up the computation time
[Figure: two scatter plots of Class A and Class B, contrasting a projection direction that is efficient for representing the data with one that is not efficient.]
Eigenfaces, the algorithm
• Original images (each flattened into an N^2-dimensional column vector):

$$\vec{a}=\begin{pmatrix}a_{1}\\ a_{2}\\ \vdots\\ a_{N^{2}}\end{pmatrix},\quad
\vec{b}=\begin{pmatrix}b_{1}\\ b_{2}\\ \vdots\\ b_{N^{2}}\end{pmatrix},\quad
\ldots,\quad
\vec{h}=\begin{pmatrix}h_{1}\\ h_{2}\\ \vdots\\ h_{N^{2}}\end{pmatrix}$$
Eigenfaces, the algorithm
• The mean face can be computed as:

$$\vec{m}=\frac{1}{M}\begin{pmatrix}a_{1}+b_{1}+\cdots+h_{1}\\ a_{2}+b_{2}+\cdots+h_{2}\\ \vdots\\ a_{N^{2}}+b_{N^{2}}+\cdots+h_{N^{2}}\end{pmatrix},\quad \text{where } M=8$$
Eigenfaces, the algorithm
• Then subtract it from the training faces:

$$\vec{a}_{m}=\begin{pmatrix}a_{1}-m_{1}\\ a_{2}-m_{2}\\ \vdots\\ a_{N^{2}}-m_{N^{2}}\end{pmatrix},\;
\vec{b}_{m}=\begin{pmatrix}b_{1}-m_{1}\\ b_{2}-m_{2}\\ \vdots\\ b_{N^{2}}-m_{N^{2}}\end{pmatrix},\;
\vec{c}_{m}=\begin{pmatrix}c_{1}-m_{1}\\ c_{2}-m_{2}\\ \vdots\\ c_{N^{2}}-m_{N^{2}}\end{pmatrix},\;
\vec{d}_{m}=\begin{pmatrix}d_{1}-m_{1}\\ d_{2}-m_{2}\\ \vdots\\ d_{N^{2}}-m_{N^{2}}\end{pmatrix},$$

$$\vec{e}_{m}=\begin{pmatrix}e_{1}-m_{1}\\ e_{2}-m_{2}\\ \vdots\\ e_{N^{2}}-m_{N^{2}}\end{pmatrix},\;
\vec{f}_{m}=\begin{pmatrix}f_{1}-m_{1}\\ f_{2}-m_{2}\\ \vdots\\ f_{N^{2}}-m_{N^{2}}\end{pmatrix},\;
\vec{g}_{m}=\begin{pmatrix}g_{1}-m_{1}\\ g_{2}-m_{2}\\ \vdots\\ g_{N^{2}}-m_{N^{2}}\end{pmatrix},\;
\vec{h}_{m}=\begin{pmatrix}h_{1}-m_{1}\\ h_{2}-m_{2}\\ \vdots\\ h_{N^{2}}-m_{N^{2}}\end{pmatrix}$$
Eigenfaces, the algorithm
• Now we build the matrix A, which is N^2 by M:

$$A=\begin{pmatrix}\vec{a}_{m} & \vec{b}_{m} & \vec{c}_{m} & \vec{d}_{m} & \vec{e}_{m} & \vec{f}_{m} & \vec{g}_{m} & \vec{h}_{m}\end{pmatrix}$$

• The covariance matrix, which is N^2 by N^2:

$$\mathrm{Cov}=A\,A^{T}$$
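The same construction can be written directly in NumPy; a minimal sketch, where the eight flattened 64x64 faces are stand-in values rather than data from the slides:

```python
import numpy as np

# Stand-in for the M = 8 flattened training faces a..h (64x64 pixels assumed)
faces = np.random.rand(8, 64 * 64)

M, D = faces.shape
m = faces.mean(axis=0)        # the mean face m, length N^2
A = (faces - m).T             # N^2 x M matrix [a_m  b_m  ...  h_m]
Cov = A @ A.T                 # N^2 x N^2 covariance matrix (up to a 1/M factor)
```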
Eigenfaces, the algorithm
• Find the eigenvalues of the covariance matrix
◦ The matrix is very large (N^2 by N^2)
◦ The computational effort is very big
• We are interested in at most M eigenvalues, since Cov = A A^T has rank at most M and therefore at most M non-zero eigenvalues
◦ In the given problem we have taken the 99 most significant eigen directions to represent the data (a sketch of the standard small-matrix trick follows below).
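The slide does not spell out how to avoid the huge N^2 x N^2 eigenproblem; the usual eigenfaces trick is to diagonalize the small M x M matrix A^T A instead, since if A^T A v = λ v then A A^T (A v) = λ (A v). A minimal NumPy sketch, reusing the matrix A built above (stand-in values here):

```python
import numpy as np

# A is the N^2 x M matrix of mean-subtracted faces from the previous slide
A = np.random.rand(64 * 64, 8)          # stand-in values

small = A.T @ A                          # M x M instead of N^2 x N^2
eigvals, V = np.linalg.eigh(small)       # eigenvectors v of A^T A
order = np.argsort(eigvals)[::-1]        # sort by decreasing eigenvalue
eigvals, V = eigvals[order], V[:, order]

U = A @ V                                # columns A v are eigenvectors of A A^T
U /= np.linalg.norm(U, axis=0)           # normalize each eigenface
```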
Example: what is a contribution factor?

Observation: "Eyes are the window of the soul." Some parts of the human face are more important than others for successful face recognition. The contribution factor is meant to quantify this kind of "importance".

[Figure: three face images, each partially occluded by a blue mask; the size of the blue mask is the same in all 3 images.]
PCA Algorithm
• Step 1: Partition face images into sub-patterns

[Figure: example face images partitioned into sub-patterns; face images are from the Yale face database.]
PCA Algorithm
• Step 2: Compute the expected contribution of each sub-pattern
◦ Generate the Mean and Median faces for each person, and use these "virtual faces" as the probe set in training
◦ Use the raw face-image sub-patterns as the gallery set for training, and compute the PCA projection matrix on this gallery set
◦ For each sample in the probe set, compute its similarity to the samples in the corresponding gallery set
PCA Algorithm
◦ If a sample from a sub-pattern's probe set is correctly classified, the contribution of this sub-pattern is incremented by 1 (a sketch of this step follows below)

[Figure: face images from the AR face database, and the computed contribution matrix.]
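A minimal sketch of this contribution computation, assuming a nearest-neighbour classifier per sub-pattern; the array layout and helper names are illustrative assumptions, not from the slides:

```python
import numpy as np

def nearest_neighbor_label(probe_vec, gallery_vecs, gallery_labels):
    """Classify one projected probe vector by its nearest gallery vector."""
    d = np.linalg.norm(gallery_vecs - probe_vec, axis=1)
    return gallery_labels[np.argmin(d)]

def expected_contributions(probe, probe_labels, gallery, gallery_labels):
    """probe, gallery: (num_subpatterns, num_samples, feature_dim) arrays of
    PCA-projected sub-pattern features. Returns one contribution per sub-pattern."""
    num_subpatterns = probe.shape[0]
    contributions = np.zeros(num_subpatterns)
    for s in range(num_subpatterns):
        for i, vec in enumerate(probe[s]):
            predicted = nearest_neighbor_label(vec, gallery[s], gallery_labels)
            if predicted == probe_labels[i]:
                contributions[s] += 1      # correctly classified: add 1
    return contributions

# Example with random stand-in data: 4 sub-patterns, 3 people, 10-D PCA features
rng = np.random.default_rng(0)
gallery = rng.normal(size=(4, 6, 10))
gallery_labels = np.array([0, 0, 1, 1, 2, 2])
probe = rng.normal(size=(4, 3, 10))
probe_labels = np.array([0, 1, 2])
print(expected_contributions(probe, probe_labels, gallery, gallery_labels))
```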
PCA Algorithm
• Step 3: Classification
When an unknown face image comes in:
◦ Partition it into sub-patterns
◦ Classify the unknown sample's identity on each sub-pattern
◦ Incorporate the expected contributions and the classification results of all sub-patterns to generate the final classification result (a weighted-voting sketch follows below)
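The slides do not fix the combination rule; a common choice is contribution-weighted voting, sketched below under that assumption:

```python
import numpy as np

def classify_face(subpattern_predictions, contributions, num_classes):
    """Combine per-sub-pattern predictions by contribution-weighted voting.

    subpattern_predictions: predicted class index for each sub-pattern
    contributions:          expected contribution (weight) of each sub-pattern
    """
    votes = np.zeros(num_classes)
    for pred, weight in zip(subpattern_predictions, contributions):
        votes[pred] += weight            # each sub-pattern votes with its weight
    return int(np.argmax(votes))         # final identity = class with most weighted votes

# Example: 4 sub-patterns voting among 3 people
print(classify_face([0, 2, 2, 1], [0.4, 0.3, 0.2, 0.1], num_classes=3))  # -> 2
```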
Difficulties with PCA
• Projection may suppress important detail
◦ The smallest-variance directions may not be unimportant
• The method does not take the discriminative task into account
◦ Typically, we wish to compute features that allow good discrimination
◦ This is not the same as the largest variance
Conclusion
PCA has proved to be an effective approach for image recognition, segmentation, and other kinds of analysis.
• Advantages:
◦ Extracts uncorrelated basic feature sets that describe the properties of a data set.
◦ Reduces the dimensionality of the original image space.
• Disadvantages:
◦ Provides little quantitative information or visualization insight.
◦ There is no way to discriminate between variance due to the object of interest and variance due to noise or background.

Researchers have proposed different schemes and made many improvements to better apply PCA in image analysis.
