
IPASJ International Journal of Computer Science (IIJCS)
Volume 4, Issue 9, September 2016, ISSN 2321-5992
Web Site: http://www.ipasj.org/IIJCS/IIJCS.htm    Email: editoriijcs@ipasj.org
GAIT RECOGNITION BASED ON LDA AND ITS VARIANTS

Kai Li, Guochao Wang
College of Computer Science and Technology, Hebei University, Baoding, China, 071000

ABSTRACT
Linear discriminant analysis (LDA for short) is a commonly used feature extraction method with important applications in the field of pattern recognition. This paper analyzes several feature extraction methods based on LDA and its variants and also gives a variant of the PCA method. Then, on the basis of the human gait energy image, feature extraction methods including DLDA, RLDA, PCA+LDA and v-PCA+RLDA are studied for gait recognition. In the experiments, gait databases from the Chinese Academy of Sciences and the minimum distance classifier are used to verify the performance of these feature extraction methods on gait recognition.

Keywords: gait recognition, LDA, variants of LDA, minimum distance classifier

1. INTRODUCTION
Gait recognition identifies people by their walking posture. It has several advantages, such as being non-contact, difficult to hide or disguise, easy to acquire and usable at a distance, and it can be carried out without disturbing others. Gait recognition is therefore one of the promising biometric methods and quickly attracted the attention of researchers. Since then, many gait recognition methods have been proposed, among which feature extraction from the human gait contour is the most commonly used approach. For example, Mohan et al. [1] proposed a gait recognition method based on local binary patterns, applied to the gait energy image (GEI) and to the region bounded by the legs of the GEI, and obtained recognition results based on the similarity between test and training images. Guo et al. [2] proposed a gait feature selection method based on mutual information: the subset of features with the largest mutual information is chosen to constitute the gait features, and a support vector machine (SVM) is then used as the classifier; in experiments they achieved good recognition results on the Southampton HiD gait database. Guan et al. [3] proposed a feature extraction method using two-dimensional principal component analysis and linear discriminant analysis (LDA) on the gait energy image; they then classified with a minimum distance classifier and achieved good results on the USF database. However, the performance of this method drops sharply under the interference of external factors, so it is not robust. Moreover, LDA [4] is a commonly used feature extraction method that obtains the optimal projection directions by maximizing the Fisher criterion function, so that the projected samples achieve the maximum between-class scatter and the minimum within-class scatter. On the basis of Fisher's idea, researchers proposed the concept of discriminant vectors, which span a subspace onto which the original samples are projected. The F-LDA method [5] weights the input space and obtains better performance in low-dimensional spaces; however, in high-dimensional spaces it requires a large amount of computation and suffers from the small sample size problem. To solve this problem, researchers first reduce the dimensionality with PCA and then apply LDA in the reduced space so that the within-class scatter matrix becomes invertible. However, this approach may discard useful information contained in the null space [6-7]. To this end, the DLDA method [8] was proposed, which finds the optimal discriminant vectors directly in the high-dimensional space so as to avoid losing important discriminant information in the null space. In addition, Lu et al. [9] proposed the RLDA method, in which the eigenvectors of the null space are viewed as the most important feature basis and the deviation and variance of the eigenvalues of the within-class scatter matrix are controlled.

This paper studies feature extraction methods based on LDA and its variants for the gait recognition problem using the human gait energy image. We also give a variant of the PCA method, and we study the gait recognition performance of the minimum distance classifier based on the extracted gait features.

2. FEATURE EXTRACTION OF GAIT WITH PCA AND LDA

Assume that $Z = \{Z_i \mid i = 1, 2, \ldots, C\}$ is a gait sample set with $N$ samples and $C$ classes, where $Z_i = \{Z_{ij} \mid Z_{ij} \in \mathbb{R}^n, j = 1, 2, \ldots, N_i\}$ is the set of $i$th-class samples, $N_i$ is the number of samples belonging to $Z_i$, and $N_1 + N_2 + \cdots + N_C = N$. Note that each $Z_{ij}$ is a gait energy image.
2.1 PCA method
Feature extraction based on PCA finds a set of optimal orthogonal basis vectors through a linear transform. First, the total scatter matrix is calculated from the gait sample set. Then, the eigenvalue equation is solved to obtain the principal component directions. Finally, some principal components are chosen as the new sample features and the samples are projected onto the selected principal components.
In the PCA method, the criterion function is $W^* = \arg\max_W |W^T S_t W|$, where $S_t$ is the total scatter matrix of the gait samples, defined as

$$S_t = \frac{1}{N}\sum_{i=1}^{N}(Z_i - \bar{Z})(Z_i - \bar{Z})^T, \qquad (1)$$

where $\bar{Z} = \frac{1}{N}\sum_{i} Z_i$.

Suppose that the eigenvalues of the total scatter matrix $S_t$ are $\lambda_1, \lambda_2, \ldots, \lambda_n$ with $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$. The eigenvectors corresponding to the $m$ ($m < n$) largest eigenvalues are selected as the projection directions, that is, we find a set of feature vectors $W^* = \{w_1, w_2, \ldots, w_m\}$. Any sample $Z_{ij}$ can then be expressed as $Y_{ij} = (W^*)^T Z_{ij}$ in the new orthogonal subspace. The detailed PCA algorithm is as follows:
Step 1 Calculate the total scatter matrix of gait samples St using (1).
Step 2 Solve eigenvalue equation of St to obtain each principal component direction.
Step 3 Select the appropriate number of principal components as new features of the sample.
Step 4 Project sample onto the selected principal components.
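
As an illustration, the following NumPy sketch implements Steps 1-4 under the assumption that each gait energy image has been flattened into one column of an $n \times N$ data matrix; the function name, variable names and the centring of the projected samples are our choices, not prescribed by the paper.

```python
import numpy as np

def pca_projection(X, m):
    """PCA feature extraction sketch (Section 2.1).

    X : (n, N) array, each column a flattened gait energy image.
    m : number of principal components to keep (m < n).
    Returns the projection matrix W (n, m) and the projected samples Y (m, N).
    """
    N = X.shape[1]
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    # Step 1: total scatter matrix S_t, Eq. (1)
    St = Xc @ Xc.T / N
    # Step 2: eigen-decomposition of the symmetric matrix S_t (ascending eigenvalues)
    eigvals, eigvecs = np.linalg.eigh(St)
    order = np.argsort(eigvals)[::-1]          # descending order
    # Step 3: keep the m leading principal components
    W = eigvecs[:, order[:m]]
    # Step 4: project the (centred) samples onto the selected components
    Y = W.T @ Xc
    return W, Y
```

For full-size gait energy images the $n \times n$ scatter matrix is large, so in practice one would work with the $N \times N$ Gram matrix or a truncated SVD instead; the direct form is kept here only to mirror the steps above.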
2.2 Variant of PCA
From the PCA method it can be seen that selecting a few principal components not only reduces the dimensionality of the samples but also retains important discriminant information. In the following, in order to better preserve this discriminant information, we use a variant of the PCA method, called v-PCA for short, for gait feature extraction. The detailed v-PCA algorithm is as follows:
Step 1 Calculate the total scatter matrix of gait samples St using (1).
Step 2 Solve eigenvalue equation of St to obtain each principal component direction.
Step 3 Select the principal components with nonnegative eigenvalue as new features of the sample.
Step 4 Project sample onto the selected principal components.
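
The only change relative to the PCA sketch above is the selection rule in Step 3: every component whose eigenvalue is nonnegative is kept. In the sketch below a small numerical tolerance decides which eigenvalues count as nonzero; that tolerance is our implementation detail, not part of the method.

```python
import numpy as np

def vpca_projection(X, tol=1e-10):
    """v-PCA sketch (Section 2.2): keep all components with nonnegative eigenvalues."""
    N = X.shape[1]
    Xc = X - X.mean(axis=1, keepdims=True)
    St = Xc @ Xc.T / N                              # Eq. (1)
    eigvals, eigvecs = np.linalg.eigh(St)
    keep = eigvals > tol                            # numerically nonzero (hence nonnegative)
    order = np.argsort(eigvals[keep])[::-1]         # sort the kept components, largest first
    W = eigvecs[:, keep][:, order]
    return W, W.T @ Xc
```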
2.3 LDA method
LDA is a feature extraction method in which the optimal projection directions are obtained by seeking the extremum of the Fisher criterion function, so that the projected samples achieve the largest between-class scatter and the smallest within-class scatter. The between-class and within-class scatter matrices of the samples are defined as follows:
$$S_b = \frac{1}{N}\sum_{i=1}^{C} N_i (\bar{Z}_i - \bar{Z})(\bar{Z}_i - \bar{Z})^T, \qquad (2)$$

$$S_w = \frac{1}{N}\sum_{i=1}^{C}\sum_{j=1}^{N_i} (Z_{ij} - \bar{Z}_i)(Z_{ij} - \bar{Z}_i)^T, \qquad (3)$$

where $\bar{Z}_i = \frac{1}{N_i}\sum_{j=1}^{N_i} Z_{ij}$ is the mean of the samples of the $i$th class and $\bar{Z} = \frac{1}{N}\sum_{i=1}^{C} N_i \bar{Z}_i$ is the mean of all samples.
The Fisher criterion function is

$$\Psi^* = \arg\max_{\Psi} \frac{|\Psi^T S_b \Psi|}{|\Psi^T S_w \Psi|}. \qquad (4)$$

In LDA, $\Psi = \{\psi_1, \psi_2, \ldots, \psi_m\}$ is obtained by optimizing (4). When solving the optimization problem (4), it is transformed into a generalized eigenvalue problem, namely $S_b \psi_k = \lambda_k S_w \psi_k$, $k = 1, 2, \ldots, m$. It can be seen that if $S_w$ is nonsingular, the projection matrix consists of the eigenvectors corresponding to the first $m$ largest eigenvalues, where $m \le C - 1$. The feature extraction algorithm for gait based on LDA is as follows:
Input: gait energy image set Z.
Output: the optimal projection matrix $\Psi_{LDA}$.
Step 1 Calculate between-class scatter matrix and within-class scatter matrix of gait energy image using (2) and (3),
respectively.
Step 2 If $S_w$ is nonsingular, solve for the eigenvalues and eigenvectors of the matrix $S_w^{-1} S_b$.
Step 3 Sort the eigenvectors by the size of their eigenvalues in descending order, and take the first $m$ eigenvectors corresponding to the $m$ largest eigenvalues as the projection matrix $\Psi_{LDA}$.
Note that LDA-based gait feature extraction requires the within-class scatter matrix to be nonsingular; otherwise the method fails.
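
A minimal sketch of this algorithm is given below, assuming $S_w$ is indeed nonsingular and using the same column-sample data layout as the PCA sketch; the names are illustrative.

```python
import numpy as np

def lda_projection(X, labels, m=None):
    """LDA feature extraction sketch (Section 2.3); requires a nonsingular S_w.

    X      : (n, N) array, each column a flattened gait energy image.
    labels : (N,) array of class indices.
    m      : number of discriminant vectors (defaults to C - 1).
    """
    classes = np.unique(labels)
    n, N = X.shape
    mean = X.mean(axis=1, keepdims=True)
    Sb = np.zeros((n, n))
    Sw = np.zeros((n, n))
    for c in classes:
        Xi = X[:, labels == c]
        Ni = Xi.shape[1]
        mi = Xi.mean(axis=1, keepdims=True)
        Sb += Ni / N * (mi - mean) @ (mi - mean).T   # Eq. (2)
        Xc = Xi - mi
        Sw += Xc @ Xc.T / N                          # Eq. (3)
    if m is None:
        m = len(classes) - 1
    # Steps 2-3: solve S_b psi = lambda S_w psi via S_w^{-1} S_b, keep m leading eigenvectors
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:m]].real
```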

3. FEATURE EXTRACTION OF GAIT WITH VARIANTS OF LDA


As a feature extraction technique, LDA has a wide range of applications because of its simple computation, good classification performance and so on. However, in the case of the small sample size problem, the within-class scatter matrix is usually singular, so that traditional LDA cannot be computed directly. Therefore, how to extract the optimal Fisher discriminant features under the small sample size problem has become a hot research topic, and in recent years researchers have proposed many ways to overcome the singularity of the matrix.
3.1 PCA+LDA method
To overcome the singularity of the within-class scatter matrix, Belhumeur et al. [7] proposed a two-step feature extraction technique, PCA+LDA for short, which first uses PCA to reduce the dimensionality and then applies the LDA method in the reduced space. The detailed algorithm is described as follows:
Input: gait energy image set Z.
Output: the optimal projection matrix $\Psi_{opt}$.
Step 1 Compute the eigenvalues and eigenvectors of the total scatter matrix $S_t$, then select the $N-C$ eigenvectors corresponding to the $N-C$ largest eigenvalues, which constitute the matrix $W_{PCA}$.
Step 2 In the reduced space, obtain the projection matrix $\Psi_{LDA}$ using the LDA method, namely
$$\Psi_{LDA} = \arg\max_{\Psi} \frac{|\Psi^T W_{PCA}^T S_b W_{PCA} \Psi|}{|\Psi^T W_{PCA}^T S_w W_{PCA} \Psi|}.$$
Step 3 Combine $\Psi_{LDA}$ with $W_{PCA}$ to obtain the optimal projection matrix $\Psi_{opt}^T = \Psi_{LDA}^T W_{PCA}^T$.
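
A sketch of these three steps is given below; it reuses the pca_projection and lda_projection functions from the earlier sketches and assumes the same (n, N) column-sample layout.

```python
import numpy as np

def pca_lda_projection(X, labels):
    """PCA+LDA sketch (Section 3.1): PCA down to N - C dimensions, then LDA."""
    N = X.shape[1]
    C = len(np.unique(labels))
    # Step 1: keep the N - C leading principal components so that S_w becomes nonsingular
    W_pca, _ = pca_projection(X, m=N - C)
    # Step 2: LDA in the reduced space
    Psi_lda = lda_projection(W_pca.T @ X, labels, m=C - 1)
    # Step 3: combined optimal projection, Psi_opt = W_PCA Psi_LDA
    return W_pca @ Psi_lda
```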
For the above feature extraction method, Yang et al. [10] gave a theoretical explanation and pointed out that it discards a lot of useful discriminant information when carrying out PCA. Zhuang et al. [11] also put forward their view of this method: they argue that useful discriminant information may be included in the discarded vector space. In addition, they pointed out that after using PCA to reduce the dimension, $S_w$ is still not guaranteed to be invertible, and the null space of $S_w$ can contain useful discriminant information.
3.2 DLDA method
In order to solve the small sample size problem, Chen et al. [6] proposed a new LDA-based feature extraction method. If $S_w$ is singular, $S_b$ is first projected onto the null space of $S_w$; the between-class scatter matrix is then computed in the projected space, and its eigenvectors corresponding to the largest eigenvalues are taken as discriminant vectors. Yu et al. [8] analyzed this method and pointed out that it does not make use of information outside the null space and requires heuristic procedures. Therefore, based on the observation that the null space of the between-class scatter matrix $S_b$ contains no discriminant information, they presented the DLDA method. It first diagonalizes $S_b$ and computes the eigenvectors corresponding to its nonzero eigenvalues; the within-class scatter matrix $S_w$ is then projected onto the space spanned by these eigenvectors, and the eigenvectors corresponding to an appropriate number of the smallest eigenvalues of the projected within-class scatter matrix are taken as discriminant vectors. The detailed DLDA feature extraction algorithm is as follows:


Input: gait image samples Z.
Output: the optimal transformation matrix $\Psi_{DLDA}$.
Step 1 Calculate the between-class scatter matrix $S_b$ and the within-class scatter matrix $S_w$ of the gait energy images using (2) and (3), respectively.
Step 2 Diagonalize the matrix $S_b$: find a matrix $V$ such that $V^T S_b V = D$, where $V^T V = I$ and $D$ is a diagonal matrix with the eigenvalues sorted in decreasing order. Take the first $m$ columns of $V$ as the matrix $Y$ and calculate $Y^T S_b Y = D_b$. Let $Z = Y D_b^{-1/2}$; then $Z^T S_b Z = I$, where $I$ is the identity matrix of order $m$ and $m < n$.
Step 3 Calculate the matrix $Z^T S_w Z$ and diagonalize it as $D_w$, i.e. $U^T Z^T S_w Z U = D_w$, where $U$ is an orthogonal matrix.
Step 4 Use the matrices $U$ and $Z$ to obtain the optimal transformation matrix $\Psi_{DLDA} = D_w^{-1/2} U^T Z^T$.
Note that this method diagonalizes the between-class scatter matrix rather than the within-class scatter matrix of traditional LDA. Meanwhile, although this method requires less computation, a lot of useful discriminant information may be lost after removing the null space of $S_b$.
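
A hedged NumPy sketch of Steps 1-4 follows; the tolerance used to decide which eigenvalues of $S_b$ count as nonzero, and the guard on the eigenvalues of $Z^T S_w Z$, are implementation details added here for numerical safety rather than part of the published method.

```python
import numpy as np

def dlda_projection(X, labels, m=None, tol=1e-10):
    """DLDA sketch (Section 3.2): diagonalize S_b first, then S_w in the reduced space.

    Returns Psi with the discriminant vectors as rows, so features = Psi @ x.
    """
    classes = np.unique(labels)
    n, N = X.shape
    mean = X.mean(axis=1, keepdims=True)
    Sb = np.zeros((n, n))
    Sw = np.zeros((n, n))
    for c in classes:
        Xi = X[:, labels == c]
        Ni = Xi.shape[1]
        mi = Xi.mean(axis=1, keepdims=True)
        Sb += Ni / N * (mi - mean) @ (mi - mean).T       # Eq. (2)
        Xc = Xi - mi
        Sw += Xc @ Xc.T / N                              # Eq. (3)
    # Step 2: diagonalize S_b and keep the eigenvectors with nonzero eigenvalues
    db, V = np.linalg.eigh(Sb)
    order = np.argsort(db)[::-1]
    if m is None:
        m = int(np.sum(db > tol))                        # rank of S_b (at most C - 1)
    Y = V[:, order[:m]]
    Db = db[order[:m]]                                   # the m leading eigenvalues of S_b
    Z = Y / np.sqrt(Db)                                  # Z = Y D_b^{-1/2}, so Z^T S_b Z = I
    # Step 3: diagonalize Z^T S_w Z
    dw, U = np.linalg.eigh(Z.T @ Sw @ Z)
    dw = np.maximum(dw, tol)                             # guard against numerically zero values
    # Step 4: Psi_DLDA = D_w^{-1/2} U^T Z^T
    return (U / np.sqrt(dw)).T @ Z.T
```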
3.3 RLDA method
In the LDA method, when the small sample size problem is severe, the performance of LDA drops dramatically, mainly because of the variance and deviation of the estimated eigenvalues. In order to solve the small sample size problem, Friedman [12] proposed regularized discriminant analysis (RDA for short), which addresses these problems by introducing a regularization term. Based on RDA and the DLDA method, Lu et al. [9] proposed a new criterion function with a regularization term, defined as follows:

$$\Psi^* = \arg\max_{\Psi} \frac{|\Psi^T S_b \Psi|}{|\eta(\Psi^T S_b \Psi) + \Psi^T S_w \Psi|}, \qquad (5)$$

where $0 \le \eta \le 1$ is the regularization parameter. They also proved that this new criterion function is equivalent to the conventional Fisher criterion. Based on the above criterion, they proposed the RLDA method. The detailed algorithm is as follows:
Input: gait image samples Z.
Output: the optimal transformation matrix $\Psi_{RLDA}$.
Step 1 Calculate the between-class scatter matrix $S_b$ and express it as $S_b = \Phi_b \Phi_b^T$, where $\Phi_b = [\Phi_{b,1}, \Phi_{b,2}, \ldots, \Phi_{b,C}]$ and $\Phi_{b,i} = (N_i/N)^{1/2}(\bar{Z}_i - \bar{Z})$.
Step 2 Determine the eigenvectors of the matrix $\Phi_b^T \Phi_b$ corresponding to its $m$ nonzero eigenvalues and collect them as the matrix $E_m$.
Step 3 Compute $U_m = \Phi_b E_m$ and $D_b = U_m^T S_b U_m$.
Step 4 Let $H = U_m D_b^{-1/2}$; then determine the eigenvectors of the matrix $H^T S_w H$, sorted by eigenvalue in ascending order, and collect them as the matrix $P$.
Step 5 Select the first $M$ ($M \le m$) eigenvectors from $P$ and denote them as $P_M$; the corresponding eigenvalues constitute the diagonal matrix $D_M$.
Step 6 Compute the optimal transformation matrix $\Psi_{RLDA} = H P_M (\eta I + D_M)^{-1/2}$.
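
A sketch of Steps 1-6 under the same data layout as before is given below; the default value of eta, the numerical tolerance and the clipping of tiny negative eigenvalues are choices made for illustration, not values from the paper.

```python
import numpy as np

def rlda_projection(X, labels, eta=0.5, M=None, tol=1e-10):
    """RLDA sketch (Section 3.3) with regularization parameter eta in [0, 1].

    Returns Psi (n, M) with the discriminant vectors as columns, features = Psi.T @ x.
    """
    classes = np.unique(labels)
    n, N = X.shape
    mean = X.mean(axis=1, keepdims=True)
    # Step 1: S_b = Phi_b Phi_b^T with Phi_b,i = sqrt(N_i / N) (mean_i - mean)
    Phi_b = np.hstack([
        np.sqrt((labels == c).sum() / N)
        * (X[:, labels == c].mean(axis=1, keepdims=True) - mean)
        for c in classes
    ])
    Sw = np.zeros((n, n))
    for c in classes:
        Xc = X[:, labels == c] - X[:, labels == c].mean(axis=1, keepdims=True)
        Sw += Xc @ Xc.T / N                               # Eq. (3)
    # Step 2: eigenvectors of the small C x C matrix Phi_b^T Phi_b with nonzero eigenvalues
    db, E = np.linalg.eigh(Phi_b.T @ Phi_b)
    Em = E[:, db > tol]
    m = Em.shape[1]
    # Steps 3-4: U_m = Phi_b E_m, H = U_m D_b^{-1/2}, then diagonalize H^T S_w H
    Um = Phi_b @ Em
    Db = np.diag(Um.T @ Phi_b @ (Phi_b.T @ Um))           # diagonal of U_m^T S_b U_m
    H = Um / np.sqrt(Db)
    dw, P = np.linalg.eigh(H.T @ Sw @ H)                  # ascending eigenvalues
    dw = np.maximum(dw, 0.0)                              # clip tiny negative round-off values
    # Steps 5-6: keep the first M eigenvectors and form Psi = H P_M (eta I + D_M)^{-1/2}
    M = m if M is None else min(M, m)
    PM, DM = P[:, :M], dw[:M]
    return (H @ PM) / np.sqrt(eta + DM)
```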

4. EXPERIMENTAL RESULTS AND ANALYSIS


To test the performance of different feature extraction methods in gait recognition, we choose the DLDA, PCA+LDA, RLDA and v-PCA+RLDA feature extraction methods. In the experiments, we use the gait databases CASIA A, CASIA B and CASIA C from the Chinese Academy of Sciences [13], and gait image sequences with a shooting angle of 90° are used. Moreover, the original gait images are denoised, detected and normalized, and then converted into gait energy images.
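
As a rough illustration of this preprocessing step, the sketch below averages aligned, binarized silhouette frames of one gait cycle into a gait energy image and stacks the flattened GEIs into the column-sample matrix used by the feature extraction sketches above; silhouette extraction, denoising and alignment are assumed to have been done beforehand, and the function names are illustrative.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average a (T, H, W) stack of 0/1 silhouette frames from one gait cycle into a GEI."""
    return np.asarray(silhouettes, dtype=float).mean(axis=0)

def build_data_matrix(gei_list):
    """Flatten a list of (H, W) gait energy images into the (n, N) column-sample matrix X."""
    return np.stack([g.ravel() for g in gei_list], axis=1)
```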
First, we select 1/6, 1/3, 1/2, 2/3 and 5/6 of the gait energy images of each pedestrian as training data and the remaining images as testing data, and the minimum distance classifier is used (a small sketch of this classifier is given after the discussion of Figure 1). The experimental results are shown in Figure 1, where (a) shows the results on the CASIA A database and (b)-(d) show the results for normal walking, walking with a backpack and
walking with a coat on the CASIA B database, respectively.

Figure 1 Recognition rate of the different methods using different training data ratios, where the x-axis represents the proportion of training data and the y-axis represents the recognition rate
As can be seen from Figure 1, the recognition rates of the different feature extraction methods gradually increase as the number of training samples increases. In particular, good recognition performance is obtained when 2/3 of the gait energy image samples of each pedestrian are selected. Moreover, over the four walking conditions of the two gait databases CASIA A and CASIA B, the gait recognition rate using v-PCA+RLDA feature extraction is better than those of the other three feature extraction methods.
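
For completeness, a small sketch of the minimum distance classifier used in these experiments is given below. It is read here as a nearest-class-mean rule on the projected gait features; the paper does not spell out the distance measure, so Euclidean distance is assumed.

```python
import numpy as np

def min_distance_classify(F_train, y_train, F_test):
    """Nearest-class-mean ("minimum distance") classification on projected gait features.

    F_train, F_test : (m, N) matrices whose columns are projected feature vectors.
    y_train         : (N,) array of training labels.
    """
    classes = np.unique(y_train)
    # class means in the projected feature space, one column per class
    means = np.stack([F_train[:, y_train == c].mean(axis=1) for c in classes], axis=1)
    # Euclidean distance from every test sample to every class mean
    d = np.linalg.norm(F_test.T[:, :, None] - means[None, :, :], axis=1)
    return classes[np.argmin(d, axis=1)]
```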
In addition, the v-PCA+RLDA method involves two parameters, namely η and M. In order to select better values of η and M, for the CASIA B gait database we select 1/2 and 2/3 of the gait energy images of each pedestrian as the training set and the remaining 1/2 and 1/3 as the test set. The experimental results are shown in Figure 2, where (a) and (b) are the results for training sample ratios of 1/2 and 2/3, respectively, under different values of the two parameters η and M.

Figure 2 Recognition rate using different parameters η and M


As can be seen from Figure 2, the gait recognition rate varies with M and η. In particular, when the proportion of training samples is 2/3 and appropriate parameters are selected, a better gait recognition rate is achieved, and the algorithm can reach its optimal recognition results with relatively few selected features. In order to compare the gait recognition performance of the different methods, namely PCA+LDA, DLDA, RLDA and v-PCA+RLDA, experiments are conducted on the CASIA A and CASIA B databases. We select 2/3 of the gait energy images of each pedestrian as the training set and the remaining data as the testing set. The results are shown in Table 1. As can be seen from Table 1, v-PCA+RLDA runs at basically the same speed as the other three methods, but its recognition rate increases significantly. This is mainly because v-PCA retains the important discriminant information in the null space associated with the between-class scatter matrix, which plays an important role in recognition. Moreover, the RLDA feature extraction method adds an adjustable disturbance factor to the eigenvalue variance and deviation, further reducing their impact on the recognition rate.
Table 1: Performance of the different feature extraction methods

Method          Gait database and walking state     Recognition rate    Running time
PCA+LDA         CASIA A                             81.35%              12.6
                CASIA B, walking with backpack      76.34%              44.2
                CASIA B, walking with coat          80.50%              35.6
                CASIA B, normal walking             81.35%              30.9
DLDA            CASIA A                             85.69%              11.7
                CASIA B, walking with backpack      80.50%              43.1
                CASIA B, walking with coat          83.69%              35.9
                CASIA B, normal walking             83.69%              30.3
RLDA            CASIA A                             88.50%              12.9
                CASIA B, walking with backpack      83.69%              43.9
                CASIA B, walking with coat          84.25%              34.8
                CASIA B, normal walking             88.50%              29.5
v-PCA+RLDA      CASIA A                             95.32%              11.5
                CASIA B, walking with backpack      89.25%              44.2
                CASIA B, walking with coat          90.36%              33.5
                CASIA B, normal walking             95.32%              30.9
In addition, in order to test the robustness of the different methods, experiments are conducted on CASIA C. The results are shown in Table 2. It can be seen that on the CASIA C database, although infrared night-time shooting affects the LDA-based extended methods PCA+LDA, RLDA and v-PCA+RLDA, v-PCA+RLDA still achieves a better recognition rate. This indicates that the v-PCA+RLDA method is strongly robust.
Table 2: Recognition rate on CASIA C using different methods

Method          Walking state             Recognition rate
PCA+LDA         Normal walking            75.51%
                Walking quickly           65.36%
                Walking slowly            62.23%
                Walking with backpack     78.64%
RLDA            Normal walking            83.62%
                Walking quickly           82.36%
                Walking slowly            80.10%
                Walking with backpack     75.51%
v-PCA+RLDA      Normal walking            92.31%
                Walking quickly           89.91%
                Walking slowly            90.26%
                Walking with backpack     85.69%


5. CONCLUSION
In this paper, we analyze the LDA method and its variants. For the gait recognition problem, a variant of PCA is given that keeps the eigenvectors of the total scatter matrix corresponding to nonnegative eigenvalues, and gait features are then obtained using the RLDA method in the reduced space. This approach better handles the impact of eigenvalue variance and deviation on the recognition rate while overcoming the difficulties caused by the small sample size problem. In the experiments, we use gait databases from the Chinese Academy of Sciences and the minimum distance classifier to study the different feature extraction methods and their recognition performance.

REFERENCES
[1] H. P. Mohan Kumar, H. S. Nagendraswamy, LBP for Gait Recognition: A Symbolic Approach based on GEI Plus RBL of GEI, In Proceedings of the International Conference on Electronics and Communication Systems (ICECS), pp. 1-5, 2014.
[2] B. Guo, M. S. Nixon, Gait Feature Subset Selection by Mutual Information, IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 39(1), pp. 36-46, 2009.
[3] Y. Guan, C. T. Li, Y. Hu, Random Subspace Method for Gait Recognition, In Proceedings of the IEEE International Conference on Multimedia and Expo Workshops (ICMEW), pp. 284-289, 2012.
[4] R. A. Fisher, The Use of Multiple Measurements in Taxonomic Problems, Annals of Eugenics, 7(2), pp. 179-188, 1936.
[5] R. Lotlikar, R. Kothari, Fractional-Step Dimensionality Reduction, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(6), pp. 623-627, 2000.
[6] L. F. Chen, H. Y. M. Liao, M. T. Ko, et al., A New LDA-based Face Recognition System Which Can Solve the Small Sample Size Problem, Pattern Recognition, 33(10), pp. 1713-1726, 2000.
[7] P. N. Belhumeur, J. P. Hespanha, D. Kriegman, Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection, IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), pp. 711-720, 1997.
[8] H. Yu, J. Yang, A Direct LDA Algorithm for High-Dimensional Data with Application to Face Recognition, Pattern Recognition, 34(10), pp. 2067-2070, 2001.
[9] J. Lu, K. N. Plataniotis, A. N. Venetsanopoulos, Regularization Studies of Linear Discriminant Analysis in Small Sample Size Scenarios with Application to Face Recognition, Pattern Recognition Letters, 26(2), pp. 181-191, 2005.
[10] J. Yang, J. Y. Yang, Why can LDA Be Performed in PCA Transformed Space?, Pattern Recognition, 36(2), pp. 563-566, 2003.
[11] X. S. Zhuang, D. Q. Dai, Improved Discriminate Analysis for High-Dimensional Data and Its Application to Face Recognition, Pattern Recognition, 40(5), pp. 1570-1578, 2007.
[12] J. H. Friedman, Regularized Discriminant Analysis, Journal of the American Statistical Association, 84, pp. 165-175, 1989.
[13] http://www.cbsr.ia.ac.cn/china/Gait%20Databases%20CH.asp.

AUTHORS
Kai Li received the B.S. and M.S. degrees from the mathematics department and the electrical engineering department of Hebei University, Baoding, China, in 1982 and 1992, respectively, and the Ph.D. degree from Beijing Jiaotong University, Beijing, China, in 2001. He is currently a Professor in the College of Computer Science and Technology, Hebei University. His current research interests include machine learning, data mining, computational intelligence, and pattern recognition.
Guochao Wang received the bachelor's degree in computer science and technology from the Industrial and Commercial College, Hebei University, Baoding, Hebei province, China, in 2009, and he is currently pursuing the M.E. degree in computer science and technology at Hebei University, Baoding, Hebei province, China. His research interests include machine learning, data mining, and pattern recognition.
