
PROJECT REPORT

Digital Signal Processing

HITEC UNIVERSITY

Heavy Industries Taxila Education City

Department of Electrical Engineering

6th SEMESTER

SECTION: D        Group No: 12

GROUP MEMBERS:
Name                  Registration No.
1. Kamran Ahmad       15-EE-155
2. Malik Ali Hassan   15-EE-031
3. Mohsin Khan        15-EE-171

Real-Time Face Detection and Recognition

Abstract:
Face detection is an automated process, used in a variety of applications, that
identifies human faces in digital images. Face detection also refers to the
psychological process by which humans locate and attend to faces in a visual
scene. Face-detection algorithms focus on the detection of frontal human faces.
Face recognition is analogous to image matching: the detected face is compared
against the images stored in a database, and any change in the stored facial
features will invalidate the matching process.

Table of Contents

1. Project Description
   1.1 Goal
   1.2 Platform/software
2. Design
   2.1 The structure for this project
   2.2 Why the 3 components are necessary
   2.3 Face Detection
   2.4 Face Recognition
   2.5 Result of Recognition
3. Result and Analysis
   3.1 Result of recognition
4. Remarks and Future Work
   4.1 Remarks on the final project
   4.2 Future work

1. Project Description

1.1. Goal
Our goal is to understand the fundamentals of face recognition and to find a way
to build a real-time face recognition system, combining what we have learned in
this class into a more complex application.

1.2. Platform/software
The chosen programming platform is MATLAB R2017a. Streaming video is captured
through the MATLAB webcam interface, which requires installing the MATLAB
Support Package for USB Webcams using the Support Package Installer.

Note that USB webcams are supported in core MATLAB; other cameras are supported
via interfaces in the Image Acquisition Toolbox.
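As a quick check that the webcam interface is set up correctly, a single frame
can be captured like this (a minimal sketch using the support package's
`webcamlist`/`webcam`/`snapshot` functions):

```matlab
camList = webcamlist;   % list the connected USB webcams
cam = webcam(1);        % connect to the first one
img = snapshot(cam);    % grab a single RGB frame
imshow(img);            % display it
clear cam;              % release the camera
```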

2. Design

2.1. The structure for this project


To make the system work in real time, we need three components: reading an
image from the webcam, face detection, and finally face recognition. These
components are described in detail in the next few subtopics, but first we
explain why all three are needed.

Read Image through Webcam → Detect Existence of Face in Image → Recognize Detected Face

2.2. Why the 3 components are necessary


First, the system must be able to read the image from the webcam for the
further processing steps. Face detection is clearly required, because if you
cannot detect a face, you cannot do recognition on it (proof by contradiction).
The only open question is which method finds faces most accurately, and the
method must also be fast enough for real-time analysis.

Face recognition is required because recognizing the face is the goal of the
project. By comparing the key features of a detected face to the faces stored
in the database, we can identify a person; the difficulty, again, is which
method to choose and how to make it run in real time.

2.3. Face detection

2.3.1. Algorithm description


For this project, we used vision.CascadeObjectDetector System object in vision
toolbox of Matlab 2017a to detect people’s face , we use an approach we call
“aggregated features from stacked images.” it’s just an approach that occurred to
us—one that yields very good results fast with relatively few images, and that is
easy to implement.

2.3.2. Important code details

a) For each face in the training set, create a stacked-image montage of 5*
images randomly selected from the images of that person. (Note that by default
we captured 8 images of each person but train on only 5. *This number, too, is
user-configurable. Training on a subset simplifies the situation if some of the
images were discarded for poor quality.) A stacked-image montage might look
like this:
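A sketch of how such a stack could be built, assuming 8 same-sized captures per
person and a hypothetical file-naming scheme (`person1*.jpg` and the variable
names are illustrative, not from our actual code):

```matlab
files = dir(fullfile('dataset', 'person1*.jpg'));  % hypothetical naming scheme
idx = randperm(numel(files), 5);                   % pick 5 of the 8 captures
imgs = cell(1, 5);
for k = 1:5
    imgs{k} = imread(fullfile('dataset', files(idx(k)).name));
end
stackedMontage = cat(1, imgs{:});  % stack the 5 images vertically
imshow(stackedMontage);
```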

Instantiate a feature detector, then detect, extract, and store features from
each montage. We tried many different detectors and extractors, with many
different input parameters, before settling on FAST features and a SURF
extractor:
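A minimal sketch of that FAST-plus-SURF extraction step, assuming `grayMontage`
holds one grayscale stacked montage (the variable names are hypothetical):

```matlab
% Detect FAST corners, then describe each corner with a SURF descriptor
points = detectFASTFeatures(grayMontage);
[montageFeatures, validPts] = extractFeatures(grayMontage, points, 'Method', 'SURF');
% montageFeatures now holds one SURF descriptor per detected corner
```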

clc;
cam = webcam(1);                        % connect to the first webcam
load('classifier.mat');                 % loads the trained faceClassifier
faceDetector = vision.CascadeObjectDetector();
Run = 1;
while Run
    img = snapshot(cam);
    img = imresize(img, [480 640]);
    bbox = step(faceDetector, img);     % detect faces in the frame
    if ~isempty(bbox)
        for i = 1:size(bbox, 1)
            crop = imcrop(img, bbox(i,:));
            crop = rgb2gray(crop);
            crop = histeq(crop);        % equalize contrast
            crop = medfilt2(crop);      % suppress noise
            sizeNormalizedImage = imresize(crop, [150 150]);
            testimage = extractHOGFeatures(sizeNormalizedImage);
            [prediction, scores] = predict(faceClassifier, testimage);
            disp(scores);
            img = insertObjectAnnotation(img, 'rectangle', bbox(i,:), prediction);
        end
    end
    figure(1), imshow(img);
    if get(gcf, 'CurrentCharacter')     % press 'q' in the figure window to quit
        disp('Key press')
        key = get(gcf, 'CurrentCharacter');
        if key == 'q'
            Run = 0;
        end
    end
end

We now have a set of features (sceneFeatures) calculated for each face, and
this set of features can now be used as a classifier. (If all went well, the
feature extraction should have taken only a few seconds.)

2.4. Face recognition

2.4.1. Algorithm description


Restart the streaming image capture and detect faces. Crop each detected face
(again, using the bounding box returned by the face detector), then detect and
extract features in each crop. (The same preprocessing steps described above
are applied here.) Match the extracted feature set against each of the feature
sets extracted from the training-image montages, and find which training-set
image it matches most closely. That one is the "best-guess prediction." Extract
the corresponding label from the name of the parent directory of the
closest-matching training images. One last thing to note: we used face
detection in the first step to help create our database in real time.
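The closest-match step described above can be sketched with `matchFeatures`,
assuming hypothetical variables `trainFeatures` (a cell array of per-person
montage descriptors), `queryFeatures` (descriptors from the detected face), and
`personNames` (the labels):

```matlab
numPeople = numel(trainFeatures);
matchCount = zeros(1, numPeople);
for k = 1:numPeople
    % pairs lists indices of mutually matching descriptors
    pairs = matchFeatures(queryFeatures, trainFeatures{k});
    matchCount(k) = size(pairs, 1);   % more matches = closer face
end
[~, best] = max(matchCount);
prediction = personNames{best};       % best-guess identity
```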

2.4.2. Code description

%% Loading Training Data
disp('Loading Training Data');
train_dir = dir('dataset/');
temp = 1;
for i = 1:size(train_dir, 1)
    temp_name = train_dir(i,1).name;
    if ~strcmp(temp_name, '.') && ~strcmp(temp_name, '..')
        % dir returns bare file names, so prepend the folder path
        temp_img = imread(fullfile('dataset', temp_name));
        temp_img = rgb2gray(temp_img);
        temp_img = histeq(temp_img);
        temp_img = medfilt2(temp_img);
        Train_faces{temp} = temp_img;
        temp_label = strsplit(temp_name);    % label is the first word of the file name
        Train_labels{temp} = char(temp_label(1));
        temp = temp + 1;
    end
end

disp('Extracting Features');
%% Extracting Features
for i = 1:size(Train_faces, 2)
    sizeNormalizedImage = imresize(Train_faces{i}, [150 150]);
    Training_features(i,:) = extractHOGFeatures(sizeNormalizedImage);
end

disp('Training Model');
%% Training Classifier
faceClassifier = fitcecoc(Training_features, Train_labels);  % multiclass ECOC classifier
save('classifier.mat', 'faceClassifier');

2.5. Result of Recognition

clc;
cam = webcam(1);                        % connect to the first webcam
load('classifier.mat');                 % loads the trained faceClassifier
faceDetector = vision.CascadeObjectDetector();
Run = 1;
while Run
    img = snapshot(cam);
    img = imresize(img, [480 640]);
    bbox = step(faceDetector, img);     % detect faces in the frame
    if ~isempty(bbox)
        for i = 1:size(bbox, 1)
            crop = imcrop(img, bbox(i,:));
            crop = rgb2gray(crop);
            crop = histeq(crop);        % equalize contrast
            crop = medfilt2(crop);      % suppress noise
            sizeNormalizedImage = imresize(crop, [150 150]);
            testimage = extractHOGFeatures(sizeNormalizedImage);
            [prediction, scores] = predict(faceClassifier, testimage);
            disp(scores);
            img = insertObjectAnnotation(img, 'rectangle', bbox(i,:), prediction);
        end
    end
    figure(1), imshow(img);
    if get(gcf, 'CurrentCharacter')     % press 'q' in the figure window to quit
        disp('Key press')
        key = get(gcf, 'CurrentCharacter');
        if key == 'q'
            Run = 0;
        end
    end
end

3. Result and analysis

The recognition rate of our program is about 80%; most failures are due to
glasses, because wearing or removing glasses changes the apparent identity.
Note that this recognition rate does not include failures of the face
detection step, so the total success rate of our program is lower.

4. Remarks and Future Work

This was a really interesting project to work on. After completing it, we
suddenly realized that we now have the ability to create such tools ourselves,
which is a great feeling.

However, there is still a long road ahead before such technology can meet
industry standards, because lighting differences and human appearance
differences (some people use makeup or otherwise cover up their true
appearance) make things a lot more complex than they should be.

Up to this point, the face recognition system functions normally, but future
work is needed to make it detect faces better. If you paint your face so the
dark/light pattern becomes less recognizable, the algorithm fails. This is
understandable, because even for humans it is hard to recognize someone with
a painted face.

Future work

What to improve on face detection


To improve the face-detection rate, we need to combine face detection and
skin-tone mapping to build better classifiers. The Viola-Jones algorithm only
ensures we have an object with the correct shading pattern; it does not
guarantee that the object is a human face. Therefore, a human-skin classifier
is necessary to improve the result.

For such a skin-tone classifier, we need samples collected from all over the
world, because skin tones vary widely.
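One simple form of such a skin classifier, which we have not implemented,
thresholds the Cb/Cr chrominance channels of the frame; the numeric ranges
below are commonly quoted values, not tuned on our own data:

```matlab
ycbcr = rgb2ycbcr(img);     % img is an RGB webcam frame
Cb = ycbcr(:,:,2);
Cr = ycbcr(:,:,3);
% Pixels whose chrominance falls in a typical skin range
skinMask = (Cb >= 77) & (Cb <= 127) & (Cr >= 133) & (Cr <= 173);
skinMask = medfilt2(skinMask, [5 5]);   % clean up speckle
% A Viola-Jones detection could then be kept only if enough
% of its bounding box overlaps skinMask.
```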

What to improve on feature detection
Right now we are only using corner detection on the region of interest. This is
not entirely accurate under different lighting conditions and different scales
of the faces. In the future, if we have a programmable camera, we will program
it to capture the database samples at the same scale as the real test subjects.

To improve software interface


Although this is not strictly necessary, we would like a software interface
that integrates database creation, deletion, and modification together with
real-time face detection and recognition. That way we would no longer need to
type commands in development mode. This is required for commercial use,
because end users cannot be expected to work from the command line.
