
Dr. S. ANANTHA PADMANABHAN

Professor, Electronics Department, Sea College of Engineering, Bangalore

Assistant Professor, Electronics Department, Sea College of Engineering, Bangalore

ABSTRACT

This paper presents a method for effective face recognition in video sequences. Minimizing the complexity of the face recognition framework is the main aim of this paper. In the proposed pose-invariant face recognition technique, the database video clip is first separated into frames. Preprocessing with Gaussian filtering is performed on every frame to remove noise. The face is detected from the preprocessed image using the Viola-Jones algorithm, and features are then extracted from the detected face. Next, the extracted feature values are presented as input to train the ANFIS classifier. The parameters of ANFIS are optimized by the artificial bee colony algorithm to achieve high recognition accuracy during training. Likewise, the features of the query image are applied during testing to analyze the performance of the proposed recognition system, and an image is accepted or rejected with respect to a threshold value. The method is implemented in MATLAB, and the outcome is examined and compared with prevailing techniques to demonstrate the performance of the proposed video face recognition method.

1. INTRODUCTION

Face recognition attracts extensive attention from researchers owing to its wide variety of applications, such as law enforcement, video surveillance, homeland security, and identity management. Thanks to developments in image sensing, the face recognition task can be executed non-intrusively, without the user's awareness or explicit cooperation. Over the previous decades the performance of face recognition has improved, but the large pose variation issue still remains unsolved [1]. Face recognition is a biometric technology for recognizing an individual from an image or a video using statistical, frequency-domain, or spatial geometric features. The human brain can identify faces with little or no effort [2][3]. The best-known face recognition techniques are Fisherface and Eigenface, which are regarded as global-feature-based approaches. However, when the face pose of the input image diverges from that of the sample images, the accuracy of these approaches degrades dramatically [4]. Moreover, many current face recognition approaches are quite sensitive to pose, occlusion, lighting, and aging. The varying appearance of a person across different poses is the bottleneck for most recent face recognition technologies. Hence, conventional appearance-based techniques such as Eigenface [5] degrade dramatically when non-frontal probes are matched against enrolled frontal faces.

A face recognition system mainly handles image acquisition, preprocessing, face region detection and extraction, feature extraction, and classification with respect to trained images. The extra feature of pose evaluation is incorporated in the suggested system. Face recognition methods are classified by feature type into appearance-based and geometric-based features [6]. Appearance-based features describe the face texture, while geometric-based features describe its shape. The recognition rates of face recognition methods are mainly influenced by expression, illumination, and pose variations, and occluded faces yield deficient recognition rates [7][8]. For face recognition based on video, the pose issue is also significant, and other challenges such as expression, facial hair, aging, cosmetics, and facial paraphernalia are also vital. Occlusion is one of the most demanding problems among the various issues associated with face recognition systems; the occlusion problem becomes prominent with objects such as sunglasses, masks, or scarves [9][10].

Various algorithms have been developed for face recognition from fixed viewpoints, but little effort has been devoted to solving the combined variation of illumination, pose, expression, etc. [11]. Many particular techniques have been suggested; among these, one may cite neural networks [12], elastic template matching [13], Karhunen-Loeve expansion, algebraic moments, Principal Component Analysis [14], discriminant analysis, Local Binary Pattern, and higher-order derivative patterns [15]. Conventionally, the combination of these particular methods using standard approaches, as well as their relative performance, is not well documented [16]. Compared with holistic methods, feature-based techniques have numerous benefits: they are robust to dissimilarities in pose, illumination, expression, occlusion, and localization errors [17].

Face recognition is mainly used for person identification; in the common scenario, a person is identified by his face. Presenting a face to a machine, however, is a complicated task. A recognition algorithm is used to classify the provided images with notable structured properties, which is common in many computer vision applications [20]. These images hold notable properties such as similar facial feature components, similar distances between facial candidates, and the same eye alignment. Normal images are used in recognition applications. Detection algorithms identify faces and separate face images including eyes, eyebrows, nose, and mouth; thus, the overall algorithm is more complicated than a single detection process or recognition algorithm [18]. The key benefit of face recognition is its non-intrusiveness: the system can identify even an uncooperative face in an unconstrained condition without the person's knowledge. But every face recognition algorithm suffers a performance drop whenever face appearance changes owing to factors such as occlusion, expression, illumination, accessories, pose, or aging [19].

The remainder of the paper is organized as follows: Section 2 reviews the related works with respect to the proposed method. Section 3 presents a brief discussion of the proposed methodology, Section 4 presents the experimental results, Section 5 analyzes performance, and Section 6 concludes the paper.

2. RELATED WORK

Reecha Sharma et al. [21] presented an efficacious pose-invariant face recognition technique using PCA and ANFIS (PCA-ANFIS). The image features under test are extracted using PCA, and the neuro-fuzzy method ANFIS is used for recognition. The suggested system recognizes face images across a variety of pose conditions by using ANFIS. The PCA technique processes the training face image dataset to compute score values that are later used in the recognition procedure. The suggested face recognition technique with the neuro-fuzzy system recognizes the input face images with a high recognition ratio.

Keshav Seshadri et al. [22] suggested a facial alignment algorithm that jointly handles the presence of facial pose variation, partial occlusion of the face, and varying illumination and expressions. Their method proceeds from sparse to dense landmarking steps using a series of models trained to best account for the texture and shape variation expressed by facial landmarks and facial shapes across poses and expressions. They also suggested a novel l1-regularized least squares technique, integrated into their shape model, that improves on the shape model used by several prior Active Shape Model (ASM) based facial landmark localization algorithms.

Manar et al. [23] suggested an innovative generalized space-curve representation system based on the Frenet frame for 3-D pose-invariant face and facial expression recognition and classification. Three-dimensional facial curves were elicited from frontal or synthetically posed 3-D facial data to derive the suggested Frenet-frame-based features. The efficiency of the suggested method was estimated on two recognition tasks, 3-D face recognition (3D-FR) and 3-D facial expression recognition (3D-FER), using benchmark 3-D datasets. The framework achieves a 96% rank-1 recognition rate for 3D-FR and a 91.4% area under the ROC curve for the six basic expressions in 3D-FER.

Jian Yang et al. [24] presented an error model based on the two-dimensional image matrix, named nuclear norm based matrix regression (NMR), for face representation and classification. NMR uses the minimal nuclear norm of the representation error image as a criterion and the alternating direction method of multipliers (ADMM) to compute the regression coefficients. They additionally developed a fast ADMM algorithm to solve the approximate NMR model and showed that it has a quadratic rate of convergence. They experimented on five famous face image databases: the Extended Yale B, AR, EURECOM, Multi-PIE, and FRGC.

Ying Tai et al. [25] presented the orthogonal Procrustes problem (OPP) as a technique for handling pose variations in 2D face images. OPP seeks an optimal linear transformation between two images with different poses so that the transformed image best fits the other. They incorporated OPP into a regression model and suggested the orthogonal Procrustes regression (OPR) system. To address the unsuitability of a linear transformation for handling highly non-linear pose variation, they further adopted a progressive strategy and suggested the stacked OPR. Another framework represents facial surfaces with radial curves emerging from the tip of the nose, with the aims of comparing, matching, and averaging facial shapes; it uses elastic shape analysis of these curves to develop a Riemannian framework for scrutinizing the shapes of complete facial surfaces. With an elastic Riemannian metric, this representation is natural for measuring facial deformations and is robust to the challenges of large facial expressions (particularly those with open mouths), missing parts, partial occlusions, and large pose variations caused by glasses, hair, and so on. This framework was shown to be promising from both empirical and theoretical perspectives.

3. PROPOSED METHODOLOGY

The suggested pose-invariant face recognition method proceeds through the stages mentioned below:

- Preprocessing: Gaussian filter
- Face detection: Viola-Jones algorithm (Haar-like feature selection, creating the integral image, AdaBoost training algorithm, cascaded classifiers)
- Feature extraction
- Classification: ANFIS classifier
- Optimization: ABC algorithm

Fig. 1 Posture invariant face recognition stages

Fig. 2 Block schematic of posture invariant face recognition


Initially, the input video is converted into a number of image frames. Each frame is preprocessed to remove noise; for this, Gaussian filtering is applied in the preprocessing stage. From the preprocessed image, the face is detected using the Viola-Jones algorithm. Once the face is detected, image features of contrast, energy, correlation, homogeneity, maximum probability, entropy, cluster shade, cluster prominence, local homogeneity, sum of squares or variance, dissimilarity, autocorrelation, and inverse difference moment are extracted. The overall flow is shown in the block diagram of Figure 2.

3.1 PREPROCESSING

Initially, the input image is processed to extract the features that demonstrate its contents. To remove noise, the processing comprises filtering. In the suggested work, preprocessing of the input image is done by applying a Gaussian filter.

The Gaussian filter is used to eliminate Gaussian noise. The Gaussian smoothing operator computes a weighted average of the surrounding pixels based on the Gaussian distribution, with the weights giving higher importance to pixels near the center (which minimizes edge blurring). The degree of smoothing is controlled by σ (a bigger σ gives more intensive smoothing); sigma signifies the amount of blurring, while the radius restricts how large the template is, and large values of sigma only produce strong blurring for large template sizes. Thus, Gaussian filtering is used to blur images and eliminate noise and fine detail. The Gaussian function is mentioned below:

g(a, b) = (1 / (2πσ²)) · e^(−(a² + b²) / (2σ²))      (1)

with a mean of 0. Gaussian smoothing is very efficient for removing Gaussian noise.
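As an illustration of this preprocessing step, the sketch below builds a kernel from equation (1), normalizes it, and convolves it with a grayscale image. This is a minimal Python sketch, not the paper's MATLAB implementation; the radius and sigma values are arbitrary assumptions for illustration.

```python
import numpy as np

def gaussian_kernel(radius, sigma):
    """Build a normalized 2-D Gaussian kernel from equation (1)."""
    ax = np.arange(-radius, radius + 1)
    a, b = np.meshgrid(ax, ax)
    g = np.exp(-(a**2 + b**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()  # normalize so overall image brightness is preserved

def gaussian_smooth(image, radius=2, sigma=1.0):
    """Smooth a 2-D grayscale image by direct convolution (edge padding)."""
    k = gaussian_kernel(radius, sigma)
    padded = np.pad(image.astype(float), radius, mode="edge")
    out = np.empty(image.shape, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            # weighted average of the neighborhood around pixel (i, j)
            out[i, j] = np.sum(padded[i:i + 2*radius + 1, j:j + 2*radius + 1] * k)
    return out
```

In practice a larger radius is needed for a larger sigma, matching the remark above that strong blurring only appears for large template sizes.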

3.2 OBJECT DETECTION USING THE VIOLA-JONES ALGORITHM

The human face descriptors are recognized as an object, and object detection is implemented by means of the Viola-Jones method.

Viola and Jones offer a fast and effective way of detecting a face in a provided image. It is based on Haar-like features plus a cascaded AdaBoost classifier. This was the first face detection framework capable of real-time performance; hence, many image processing applications that need faces as input are built using this algorithm, and it is the most widely used face detection algorithm. The algorithm significantly comprises the following stages.

- Creating the integral image
- AdaBoost training algorithm
- Cascaded classifiers

We introduce these ideas briefly here and explain them elaborately in the following parts.

The value of a Haar-like rectangular feature is the difference between the sums of the pixels in its black and white areas:

V_RF = Σ P_A(black) − Σ P_A(white)      (2)

where V_RF is the rectangular feature value, P_A(black) is the sum of the pixels in the black area, and P_A(white) is the sum of the pixels in the white area.

The Viola-Jones technique uses a variation of the AdaBoost algorithm, created by Freund and Schapire, to select a minimal set of crucial features and create an efficient classifier. For the best results, the training data must include images across a range of lighting conditions and facial properties.

The detection system does not work with image intensities directly; the object detection process categorizes images according to simple feature values. The integral image can be computed from an image using a few operations per pixel, after which any of these Haar-like features can be calculated at any location or scale in constant time.

The integral image at location (x, y) contains the sum of the pixels above and to the left of (x, y), inclusive:

Ii(x, y) = Σ_{x′ ≤ x, y′ ≤ y} i(x′, y′)      (3)
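The constant-time rectangle sum implied by equation (3) can be sketched as follows; this is an illustrative Python sketch (function names are ours, not from the paper), using cumulative sums to build the integral image and four lookups per rectangle.

```python
import numpy as np

def integral_image(img):
    """Ii(x, y): sum of all pixels above and to the left of (x, y), inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, height, width):
    """Sum of pixels inside a rectangle using at most 4 integral-image lookups."""
    bottom, right = top + height - 1, left + width - 1
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]      # subtract the strip above
    if left > 0:
        total -= ii[bottom, left - 1]    # subtract the strip to the left
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]   # add back the doubly subtracted corner
    return total
```

A Haar-like feature value, as in equation (2), is then just the difference of two or three such rectangle sums, regardless of the feature's scale.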

3.2.4 AdaBoost algorithm

The object detection framework engages the AdaBoost learning algorithm both to select the best features and to train the classifiers that use them. The algorithm builds a powerful classifier as a weighted linear combination of simple weak classifiers:

h(x) = sign( Σ_{j=1}^{N} β_j h_j(x) )      (4)

Each weak classifier h_j(x) is a threshold function based on a feature F_j, with an inequality sign:

h_j(x) = −T_j if F_j < θ_j, and T_j otherwise      (5)

in which the threshold θ_j and the outputs T_j are determined in training, together with the coefficients β_j, and x is a 24-by-24 sub-window of an image.
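Equations (4) and (5) can be sketched directly in code; this is a minimal Python illustration of the decision rule only (the stump parameters below are arbitrary assumptions, not trained values).

```python
def weak_classifier(feature_value, theta, T):
    """h_j(x) from equation (5): -T if the feature is below the threshold, +T otherwise."""
    return -T if feature_value < theta else T

def strong_classifier(feature_values, stumps):
    """h(x) from equation (4): sign of the weighted sum of weak classifiers.

    stumps: list of (beta_j, theta_j, T_j) triples, one per selected feature,
    paired positionally with feature_values."""
    score = sum(beta * weak_classifier(f, theta, T)
                for f, (beta, theta, T) in zip(feature_values, stumps))
    return 1 if score >= 0 else -1
```

Training (not shown) would reweight the examples after each round and pick the next stump with the lowest weighted error, which is how AdaBoost selects the "minimal set of crucial features" mentioned above.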

3.2.5 Cascade architecture

Viola and Jones used a series of increasingly complex classifiers, called a cascade, to improve computational efficiency and minimize the false positive rate. An input window is evaluated on the first classifier in the cascade; if that classifier returns false, computation on that window ends and the detector returns false. If the classifier returns true, the window is passed to the following classifier in the cascade. Thus, if the window passes every classifier, with each returning true, the detector returns true for that window. The more a window resembles a face, the more classifiers are evaluated on it and the longer it takes to classify. Since most windows of an image do not resemble faces, most are rapidly discarded as non-faces. The classification process of the Viola-Jones algorithm is illustrated in Figure 5.


Fig.5 Classification process in Viola-Jones algorithm.
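The cascade evaluation just described can be sketched in a few lines; this is an illustrative Python sketch in which each stage is represented abstractly as a pass/reject predicate (a real detector would use boosted stage classifiers as in equation (4)).

```python
def cascade_detect(window_features, stages):
    """Evaluate one window through a cascade of classifiers.

    stages: list of callables, each returning True (pass) or False (reject).
    Returns True only if every stage passes; rejects as early as possible,
    which is what makes most non-face windows cheap to discard."""
    for stage in stages:
        if not stage(window_features):
            return False  # window rejected: no face found
    return True  # all stages passed: face found
```

Early stages are deliberately simple (few features) so that the common case, a non-face window, exits after minimal work.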

3.3 FEATURE EXTRACTION

Feature extraction transforms the detected object image into a series of features. The features of contrast, correlation, entropy, energy, local homogeneity, maximum probability, cluster shade, sum of squares, homogeneity, cluster prominence, variance, dissimilarity, autocorrelation, and inverse difference moment are used to describe the image content.

3.3.1 Contrast (C): Returns a measure of the intensity contrast between a pixel and its neighbor over the whole image; contrast is 0 for a constant image.

C = Σ_{a,b} |a − b|² p(a, b)      (6)

in which p(a, b) is the normalized gray-level co-occurrence matrix entry for the gray-level pair (a, b).

Cluster shade is a measure of the asymmetry of the matrix and is intended to gauge the perceptual lack of symmetry of the image:

Shade = Σ_{a=0}^{G−1} Σ_{b=0}^{G−1} (a + b − μ_x − μ_y)³ × p(a, b)      (7)

Cluster prominence is also a measure of asymmetry; when the cluster prominence value is low, there is a peak in the co-occurrence matrix around the mean values:

P = Σ_{a=0}^{G−1} Σ_{b=0}^{G−1} (a + b − μ_x − μ_y)⁴ × p(a, b)      (8)

Entropy (e): a measure of the randomness of the matrix entries:

e = −Σ_a Σ_b P(a, b) log₂ P(a, b)      (12)

3.3.11 Variance (V): a measure that signifies how much the gray levels vary from the mean:

V = Σ_b Σ_a (a − μ)² P(a, b)      (13)
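A few of these texture features can be computed directly from a normalized co-occurrence matrix; the sketch below implements equations (6), (12), and (13) with NumPy as an illustration (the helper name and the choice of the row index as the gray level for the variance mean are our assumptions).

```python
import numpy as np

def glcm_features(P):
    """Contrast, entropy, and variance of a normalized co-occurrence matrix P,
    following equations (6), (12), and (13). P must sum to 1."""
    G = P.shape[0]
    a, b = np.meshgrid(np.arange(G), np.arange(G), indexing="ij")
    contrast = np.sum((a - b) ** 2 * P)          # equation (6)
    nz = P > 0                                   # skip zero entries: log2(0) undefined
    entropy = -np.sum(P[nz] * np.log2(P[nz]))    # equation (12)
    mu = np.sum(a * P)                           # mean gray level along the row index
    variance = np.sum((a - mu) ** 2 * P)         # equation (13)
    return contrast, entropy, variance
```

For a diagonal co-occurrence matrix (every pixel pair identical), contrast is 0, as the definition of equation (6) requires.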

3.4 OPTIMIZATION USING THE ARTIFICIAL BEE COLONY ALGORITHM

The features extracted from the preprocessed images are optimized using the artificial bee colony (ABC) algorithm. This algorithm comprises three groups of bees: employed bees, onlooker bees, and scouts. Correspondingly, in the optimization framework, the number of food sources in the ABC algorithm portrays the number of solutions in the population. A good food source signifies a promising solution position for the optimization issue, and the food source quality represents the associated solution's fitness cost. The ABC optimization process is explained below.

Initialization Phase: The food sources, whose population size is S, are randomly created by scout bees. Every food source, denoted by q(x*_n), is an input vector for the optimization issue; x*_n contains d variables, where d is the dimension of the search space of the objective function being optimized. The initial food sources are randomly produced through equation (14):

x_mi = L_i + rand(0, 1) · (U_i − L_i)      (14)

in which U_i and L_i are the upper and lower bounds of the solution space of the objective function, and rand(0, 1) is a random number within the range [0, 1].

Fitness calculation

The fitness of the food sources is significant for finding the global optimum. The fitness is computed by the formula below, after which a greedy selection is applied between the current food source and its neighbor:

Fit_m(q(x_n)) = 1 / (1 + f_m(q(x_n)))   if f_m(q(x_n)) ≥ 0
Fit_m(q(x_n)) = 1 + |f_m(q(x_n))|      if f_m(q(x_n)) < 0      (15)

in which f_m(q(x_n)) is the objective function value of q(x_n).

Employed Bee Phase: An employed bee drifts to a food source and then searches for a new food source within the neighborhood of that source. The employed bees memorize the food source of greater quantity, and the food source information saved by the employed bee is shared with the onlooker bees. A neighbor food source N_mi is generated and computed by equation (16):

N_mi = x_mi + θ_mi (x_mi − x_ki)      (16)

where x_k is a randomly chosen food source and θ_mi is a random number in the range [−1, 1]. This parameter range may be suitably adjusted for particular issues.

Onlooker Bee Phase: Onlooker bees compute the profitability of the food sources by perceiving the waggle dance in the dance area and then probabilistically choose a food source of greater quality, performing a random search in its neighborhood. The quantity of a food source is estimated from its profitability relative to the profitability of all food sources. The probability p_m is determined by the following formula:

probability(p_m) = Fit_m(q(x_n)) / Σ_{m=1}^{S} Fit_m(q(x_n))      (17)

in which Fit_m(q(x_n)) is the fitness of q(x_n).

Onlooker bees examine the neighborhood of the food source as per expression (18):

N_m = N_mi + θ_mi (q(x_ni) − x_ki)      (18)

Scout Phase: If the profitability of a food source cannot be enhanced and the number of unchanged trials exceeds a predetermined number, named the "limit", the solution is abandoned by the scout bees, and new solutions are randomly searched for by the scouts. The scout discovers the new solution q(x*_n) by applying expression (19):

x_mi = L_i + rand(0, 1) · (U_i − L_i)      (19)

where rand(0, 1) is a random number within the range [0, 1], and U_i and L_i are the upper and lower bounds of the solution space of the objective function. The flow diagram of the ABC algorithm (initialization and fitness evaluation, then the employed, onlooker, and scout bee phases repeated until the maximum iteration is reached) is presented in figure 6.
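The three phases above can be combined into a compact optimization loop. The sketch below is a minimal Python illustration of ABC minimizing a generic objective over a box, following equations (14)-(17) and (19); the colony size, limit, and iteration count are arbitrary assumptions, and this is not the paper's MATLAB code.

```python
import numpy as np

def abc_minimize(f, lower, upper, S=10, limit=20, iters=100, seed=0):
    """Minimal artificial-bee-colony sketch for minimizing f over a box."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    d = len(lower)
    X = lower + rng.random((S, d)) * (upper - lower)   # equation (14)
    vals = np.array([f(x) for x in X])
    trials = np.zeros(S, int)

    def fitness(v):                                    # equation (15)
        return 1.0 / (1.0 + v) if v >= 0 else 1.0 + abs(v)

    for _ in range(iters):
        fits = np.array([fitness(v) for v in vals])
        probs = fits / fits.sum()                      # equation (17)
        for phase in ("employed", "onlooker"):
            for m in range(S):
                # onlookers visit a source in proportion to its probability
                if phase == "onlooker" and rng.random() > probs[m]:
                    continue
                k = int(rng.integers(S - 1))
                k += k >= m                            # random partner k != m
                i = int(rng.integers(d))
                cand = X[m].copy()                     # neighbor move, eq. (16)
                cand[i] = X[m, i] + rng.uniform(-1, 1) * (X[m, i] - X[k, i])
                cand[i] = np.clip(cand[i], lower[i], upper[i])
                cv = f(cand)
                if fitness(cv) > fits[m]:              # greedy selection
                    X[m], vals[m], fits[m], trials[m] = cand, cv, fitness(cv), 0
                else:
                    trials[m] += 1
        for m in range(S):                             # scout phase, eq. (19)
            if trials[m] > limit:
                X[m] = lower + rng.random(d) * (upper - lower)
                vals[m] = f(X[m])
                trials[m] = 0
    best = vals.argmin()
    return X[best], vals[best]
```

In the proposed system the objective would score ANFIS recognition quality on the training set; here any scalar function of the parameter vector can be plugged in.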

3.5 CLASSIFICATION USING THE ANFIS CLASSIFIER

The data set is divided into two categories, training data and testing data; the training data set comprises images of all the types. In the testing process, the optimized features are taken and compared with the best available solution. The features q(x⃗1), q(x⃗2), ..., q(x⃗n) derived from the ABC optimization are classified using the well-known ANFIS classifier, which comprises five layers of nodes. Of the five layers, the first and fourth layers contain adaptive nodes, while the second, third, and fifth layers hold fixed nodes. The ANFIS architecture is provided in figure 7. The features are categorized using ANFIS, whose rule base is of the form:

If q(x⃗1) is A_i, q(x⃗2) is B_i, and q(x⃗n) is C_i, then

Rules_i = a_i · q(x⃗1) + b_i · q(x⃗2) + c_i · q(x⃗n) + f_i      (20)

in which q(x⃗1), q(x⃗2), and q(x⃗n) are the inputs; A_i, B_i, and C_i are the fuzzy sets; Rules_i is the output within the fuzzy region specified by the fuzzy rule; and a_i, b_i, and c_i are the design parameters determined by the training method.

Fig. 7 ANFIS architecture

Layer-1: Each node i in this layer is a square node with a node function:

O_1,i = μ_Ai(q(x⃗1)),  O_1,i = μ_Bi(q(x⃗2)),  O_1,i = μ_Ci(q(x⃗n))      (21)

Generally μ_Ai(q(x⃗1)), μ_Bi(q(x⃗2)), and μ_Ci(q(x⃗n)) are chosen to be bell-shaped, with maximum equal to 1 and minimum equal to 0, and are defined as:

μ_Ai(q(x⃗1)) = μ_Bi(q(x⃗2)) = μ_Ci(q(x⃗n)) = 1 / (1 + ((x − o_i) / p_i)^(2q_i))      (22)

where {o_i, p_i, q_i} is the parameter set. The parameters in this layer are referred to as premise parameters.

Layer-2: Every node of this layer is a circle node labeled Π, which multiplies the incoming signals and sends the product out. For example,

O_2,i = wt_i = μ_Ai(q(x⃗1)) × μ_Bi(q(x⃗2)) × μ_Ci(q(x⃗n)),  i = 1, 2      (23)

Layer-3: Each node in this layer is a circle node labeled N. The i-th node calculates the ratio of the i-th rule's firing strength to the sum of all rules' firing strengths:

O_3,i = w̄t_i = wt_i / (wt_1 + wt_2),  i = 1, 2      (24)

Layer-4: Every node i in this layer is a square node with a node function:

O_4,i = w̄t_i · Rules_i = w̄t_i (a_i · q(x⃗1) + b_i · q(x⃗2) + c_i · q(x⃗n) + f_i)      (25)

where w̄t_i is the output of layer 3 and {a_i, b_i, c_i, f_i} is the parameter set. Parameters in this layer are referred to as consequent parameters.

Layer-5: The single node in this layer is a circle node labeled Σ that computes the overall output as the summation of all incoming signals:

O_5,i = Σ_i w̄t_i · Rules_i = (Σ_i wt_i · Rules_i) / (Σ_i wt_i)      (26)

f_n(o_i) = (wt_1 · Rules_1 + wt_2 · Rules_2) / (wt_1 + wt_2)      (27)

f_n(o_i) = w̄t_1 · Rules_1 + w̄t_2 · Rules_2      (28)
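The five layers can be traced end-to-end in a few lines of code. The sketch below is an illustrative Python forward pass for a two-input, two-rule ANFIS, following equations (21)-(26); the function names and the reduction to two inputs are our simplifying assumptions (training of the premise and consequent parameters is not shown).

```python
def bell_mf(x, o, p, q):
    """Generalized bell membership function from equation (22)."""
    return 1.0 / (1.0 + ((x - o) / p) ** (2 * q))

def anfis_forward(x1, x2, premise, consequent):
    """Forward pass through the five ANFIS layers for two rules.

    premise: per-rule ((o, p, q) for input 1, (o, p, q) for input 2).
    consequent: per-rule (a, b, f) linear parameters."""
    # Layer 1: membership degrees (equation 21)
    mu = [(bell_mf(x1, *pA), bell_mf(x2, *pB)) for pA, pB in premise]
    # Layer 2: firing strengths wt_i as products of memberships (equation 23)
    wt = [mA * mB for mA, mB in mu]
    # Layer 3: normalized firing strengths (equation 24)
    wbar = [w / sum(wt) for w in wt]
    # Layer 4: rule outputs weighted by normalized strengths (equations 20, 25)
    rules = [a * x1 + b * x2 + f for a, b, f in consequent]
    # Layer 5: overall output as the sum of weighted rule outputs (equation 26)
    return sum(wb * r for wb, r in zip(wbar, rules))
```

In the proposed system, the premise and consequent parameters are the quantities tuned by the ABC optimization during training.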

Accordingly, the obtained feature is classified by ANFIS, and the classified output is signified as f_n(O_i). Given a predefined threshold value ω and the neural network output Y: if Y is greater than the threshold value ω, the provided input image is acknowledged (recognized); if Y is less than the threshold value ω, the image is not acknowledged.

4. EXPERIMENTAL RESULTS

The face recognition system is implemented in MATLAB R2014a and executed on a personal computer with an Intel(R) Core(TM) i3 processor (2.40 GHz CPU) and 4 GB RAM. The face recognition procedure is analyzed with dissimilar video frames, and the outcome of the intended system is illustrated below.

(a) (b)

Fig. 8 Input training dataset for recognized (a) and non-recognized (b) images

In Figure 8, the training phase acquires as input a dataset of multiple objects for recognizing and non-recognizing the images; the input dataset is transformed into frames to undergo preprocessing.

(a) (b)

Fig. 9 Preprocessing for recognized (a) and non-recognized (b) training images

Originally the videos are converted into frames; the preprocessing performs the RGB to gray conversion for the recognized and non-recognized images with the help of a Gaussian filter, as illustrated in Figure 9(b). Then the frontal faces are extracted from the preprocessed images by means of the Viola-Jones algorithm.

(a) (b)

Fig. 10 Frontal faces of (a) recognized and (b) non-recognized training images

(a)

(b)

Fig. 11 Input Testing Dataset for recognized (a) and non-recognized (b) images

In Figure 11, the testing phase acquires as input a dataset of multiple objects for recognizing and non-recognizing the images; the input dataset is then transformed into frames for preprocessing.

(a)

(b)

Fig. 12 Preprocessing for recognized (a) and non-recognized (b) testing images

Originally the videos are converted into frames; the preprocessing carries out the RGB to gray conversion for the recognized and non-recognized images by means of the Gaussian filter, as illustrated in Figure 12(b). Then the frontal faces are extracted from the preprocessed RGB frames by means of the Viola-Jones algorithm.

(a) (b)

Fig. 13 Frontal faces of (a) recognized and (b) non-recognized testing images using Viola-Jones

The Viola and Jones face detector is carried out locally on every preferred bounding box around the associated pixel regions of an image. The frontal faces are identified as recognition and non-recognition images by means of the Viola-Jones algorithm, as illustrated in the figure. The features are then extracted for categorization by the intended ANFIS, evaluated on precision, recall, accuracy, sensitivity, specificity, FM, FDR, FNR, FPR, FAR, FRR, and MCC. The performance of our proposed ANFIS classification technique is compared with the conventional neural network and KNN.

5. PERFORMANCE ANALYSIS

The performance of the intended face recognition technique is analyzed using the statistical measures illustrated below.

Precision

Precision is the fraction of recognized images that are relevant to the query image.

precision = TP / (TP + FP)      (29)

Recall

Recall ascertains the fraction of images relevant to the query image that are effectively recognized.

recall = TP / (TP + FN)      (30)

F-Measure

FMeasure = 2 × (precision × recall) / (precision + recall)      (31)

Accuracy

Accuracy computes the closeness of the recognized image to the query image.

Accuracy = (TP + TN) / (TP + FP + TN + FN)      (32)

Sensitivity

Sensitivity measures the proportion of images relevant to the query image that are effectively recognized.

Sensitivity = TP / (TP + FN)      (33)

Specificity

Specificity measures the proportion of non-relevant images that are correctly rejected.

Specificity = TN / (FP + TN)      (34)

False Discovery Rate

The false discovery rate is the expected proportion of false positives among all images recognized as positive.

FDR = FP / (FP + TP)      (35)

False Negative Rate

It is the ratio of the number of positive events wrongly categorized as negatives to the total number of actual positive events.

False Negative Rate = FN / (FN + TP)      (36)

False Positive Rate

It is the ratio of the number of negative events wrongly categorized as positives to the total number of actual negative events.

False Positive Rate = FP / (FP + TN)      (37)

The false acceptance rate and false rejection rate are defined here in terms of the raw false positive and false negative counts, respectively:

FAR = FP,  FRR = FN      (38)

Matthews Correlation Coefficient

MCC considers the true and false positives and negatives and is regarded as a balanced measure that can be employed even when the classes are of extremely dissimilar sizes.

MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))      (39)
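All of the measures in equations (29)-(37) and (39) follow from the four confusion-matrix counts; the sketch below computes them in one place (an illustrative Python helper of our own, returning fractions in [0, 1] rather than percentages).

```python
def classification_metrics(TP, FP, TN, FN):
    """Statistical measures from equations (29)-(37) and (39), as fractions."""
    precision = TP / (TP + FP)
    recall = TP / (TP + FN)                      # identical to sensitivity (33)
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (TP + TN) / (TP + FP + TN + FN)
    specificity = TN / (FP + TN)
    fdr = FP / (FP + TP)
    fnr = FN / (FN + TP)
    fpr = FP / (FP + TN)
    mcc = (TP * TN - FP * FN) / (
        ((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN)) ** 0.5)
    return {"precision": precision, "recall": recall, "f_measure": f_measure,
            "accuracy": accuracy, "specificity": specificity,
            "fdr": fdr, "fnr": fnr, "fpr": fpr, "mcc": mcc}
```

Note that FDR = 1 − precision and FNR = 1 − recall, which is a quick consistency check for any reported results table.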

Object detection and recognition are analyzed with different video frames, and the outcomes of the intended system are tabulated below.

Performance measure   Proposed ANFIS   Neural Network   KNN
Precision             95               81               94
Recall                100              95.3             98.9
F-measure             97.4             87.6             96.4
Accuracy              96.9             85.8             95.7
Sensitivity           100              95.3             98.9
Specificity           92.5             75.3             91
FDR                   5                19               6
FNR                   0                4.71             1.05
FPR                   7.46             24.7             8.96
FAR                   5                19               6
FRR                   0                4                1
MCC                   93.8             72.6             91.2

Table 1: Performance table for classification

The above comparison table illustrates that our intended ANFIS outperforms the conventional classification approaches, neural network and KNN, in terms of precision, recall, accuracy, sensitivity, specificity, FM, FDR, FNR, FPR, FAR, FRR, and MCC. The comparison graph is illustrated below.

In figure 14, the comparison graph illustrates that our intended ANFIS approach is better than the conventional classification approaches of neural network and KNN. Our proposed technique yields a high precision of 95%, 100% recall and sensitivity, accuracy of 96.9%, specificity of 92.5%, F-measure of 97.4%, FDR of 5%, FNR of 0%, FPR of 7.46%, FAR of 5%, FRR of 0%, and MCC of 93.8%.

6. CONCLUSIONS

In the proposed pose-invariant face recognition technique, preprocessing is done using a Gaussian filter, and the face is identified effectively via the Viola-Jones algorithm. The ABC-optimized ANFIS classifier is used to classify the image as recognized or non-recognized. In the performance analysis, our suggested method is investigated using different performance metrics: precision (95%), recall (100%), F-measure (97.4%), sensitivity (100%), specificity (92.5%), accuracy (96.9%), false discovery rate (5%), false negative rate (0%), false positive rate (7.46%), false acceptance rate (5%), false rejection rate (0%), and Matthews correlation coefficient (93.8%). The comparison results prove that our suggested ANFIS gives better results than the prevailing neural network and KNN.

REFERENCES

[1] Sang, Gaoli, Jing Li, and Qijun Zhao, "Pose-invariant face recognition via RGB-D images",

Computational intelligence and neuroscience, No.13, 2016.

[2] Patil, Hemprasad, Ashwin Kothari, and KishorBhurchandi, "Expression invariant face

recognition using semi-decimated DWT, Patch-LDSMT, feature and score level fusion", Applied

Intelligence, Vol.44, No.4, pp.913-930, 2016.

[3] Chen, Qiu, Koji Kotani, Feifei Lee, and Tadahiro Ohmi, "An Improved Face Recognition Algorithm Using Histogram-Based Features in Spatial and Frequency Domains", World Academy of Science, Engineering and Technology, International Journal of Computer, Electrical, Automation, Control and Information Engineering, Vol.10, No.2, pp.360-364, 2016.

[4] Nitin Sharma, Ranjith Kaur, "Review of Face Recognition Techniques", International Journal of Advanced Research in Computer Science and Software Engineering, Vol.6, No.7, 2016.

[5] Zhang, Jian, Jinxiang Zhang, and Rui Sun. "Pose-invariant face recognition via SIFT feature

extraction and manifold projection with Hausdorff distance metric", In Security, Pattern

Analysis, and Cybernetics (SPAC), pp. 294-298, 2014.

[6] Ali, Asem M, "A 3D-based pose invariant face recognition at a distance framework", IEEE

Transactions on Information Forensics and Security, Vol.9, No.12, pp.2158-2169, 2014.

[7] Muruganantham, S., and T. Jebarajan, "An Efficient Face Recognition System Based On the

Combination of Pose Invariant and Illumination Factors", International Journal of Computer

Applications, Vol.50, No.2, 2012.

[8] Azeem, Aisha, Muhammad Sharif, Mudassar Raza, and Marryam Murtaza, "A survey: face recognition techniques under partial occlusion", Int. Arab J. Inf. Technol, Vol.11, No.1, pp.1-10, 2014.

[9] Khadatkar, Ashwin, Roshni Khedgaonkar, and Patnaik, "Occlusion invariant face recognition system", In Futuristic Trends in Research and Innovation for Social Welfare (Startup Conclave), pp.1-4, 2016.

[10] Sindhuja, A., S. Devi Mahalakshmi, and K. Vijayalakshmi, "Age invariant face recognition

with occlusion", In Advanced Communication Control and Computing Technologies

(ICACCCT), IEEE, pp.83-87, 2012.

[11] Sharma, Poonam, Ram N. Yadav, and Karmveer V. Arya, "Pose-invariant face recognition

using curvelet neural network", IET Biometrics, Vol.3, No.3, pp.128-138, 2014.

[12] Ji, Shuiwang, Wei Xu, Ming Yang, and Kai Yu, "3D convolutional neural networks for

human action recognition", IEEE transactions on pattern analysis and machine intelligence,

Vol.35, No. 1, pp.221-231, 2013.

[13] Drira, Hassen, Boulbaba Ben Amor, Anuj Srivastava, Mohamed Daoudi, and Rim Slama, "3D face recognition under expressions, occlusions, and pose variations", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.35, No.9, pp.2270-2283, 2013.

[14] Sharma, Reecha, and M. S. Patterh, "A New Hybrid Approach Using PCA for Pose

Invariant Face Recognition", Wireless Personal Communications, Vol.85, No.3, pp.1561-1571,

2015.

[15] Agrawal, Amrit Kumar, and Yogendra Narain Singh, "Evaluation of Face Recognition Methods in Unconstrained Environments", Computer Science, Vol.48, pp.644-651, 2015.

[16] Beham, M. Parisa, and S. Mohamed Mansoor Roomi, "Face recognition using appearance based approach: A literature survey", In Proceedings of International Conference & Workshop on Recent Trends in Technology, Mumbai, Maharashtra, India, Vol.2425, p.1621, 2012.

[17] Meena, K., A. Suruliandi, and R. Reena Rose, "An Illumination Invariant Texture Based

Face Recognition", ICTACT Journal on Image and Video Processing, Vol.4, No.2, 2013.

[18] Makwana, Kaushik R. "A Survey on Face Recognition Eigen face and PCA method."

International Journal, Vol.2, No.2, 2014.

[19] Shermina, "Impact of locally linear regression and fisher linear discriminant analysis in pose

invariant face recognition", International Journal of Computer Science and Network Security,

Vol.10, No.10, 2010.

[20] Rath, Subrat Kumar, and Siddharth Swarup Rautaray, "A Survey on Face Detection and Recognition Techniques in Different Application Domain", International Journal of Modern Education and Computer Science, Vol.6, No.8, 2014.

[21] Sharma, Reecha, and M. S. Patterh, "A new pose invariant face recognition system using PCA and ANFIS", Optik-International Journal for Light and Electron Optics, Vol.126, No.23, pp.3483-3487, 2015.

[22] Seshadri, Keshav, and Marios Savvides, "Towards a Unified Framework for Pose, Expression, and Occlusion Tolerant Automatic Facial Alignment", Vol.38, No.10, pp.2110-2122, 2015.

[23] Amad, Iftekharuddin, "Frenet Frame-Based Generalized Space Curve Representation for

Pose-Invariant Classification and Recognition of 3-D Face," in IEEE Transactions on Human-

Machine Systems, Vol. 46, No. 4, pp. 522-533, 2016.

[24] J. Yang, L. Luo, J. Qian, Y. Tai, F. Zhang, and Y. Xu, "Nuclear Norm based Matrix Regression with Applications to Face Recognition with Occlusion and Illumination Changes", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.

[25] Y. Tai, J. Yang, Y. Zhang, L. Luo, J. Qian and Y. Chen, "Face Recognition With Pose

Variations and Misalignment via Orthogonal Procrustes Regression," IEEE Transactions on

Image Processing, Vol. 25, No. 6, pp. 2673-2683, 2016.

[26] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi and R. Slama, "3D Face Recognition

under Expressions, Occlusions, and Pose Variations," IEEE Transactions on Pattern Analysis and

Machine Intelligence, Vol. 35, No. 9, pp. 2270-2283, 2013.
