
This article has been accepted for inclusion in a future issue of this journal.

Content is final as presented, with the exception of pagination.

ZHANG et al.: N-2-DPCA FOR EXTRACTING FEATURES FROM IMAGES 5

Algorithm 1 Iteratively Reweighted Method for N-2-DPCA
Input: Training data A_1, ..., A_s; projection dimension r.
1: Initialize: W_i^0 = I and eps_i^0 = 1 for i = 1, ..., s; k = 0; K = 4.
2: Compute D: D^{k+1} = sum_{i=1}^s A_i^T W_i^k (W_i^k)^T A_i.
3: Update P: P^{k+1} = [d_1, ..., d_r], where d_i is the orthonormal eigenvector of D^{k+1} corresponding to the i-th largest eigenvalue.
4: Compute X_i and eps_i: X_i^{k+1} = A_i - A_i P^{k+1} (P^{k+1})^T, and eps_i^{k+1} = min(eps_i^k, sigma_{K+1}(X_i^{k+1})).
5: Update W_i: W_i^{k+1} = (X_i^{k+1} (X_i^{k+1})^T + eps_i^{k+1} I)^{-1/4}.
6: If eps_i = 0, go to step 7; otherwise set k = k + 1 and go to step 2.
7: Output: Optimal projection matrix P^{k+1}.

Fig. 4. Objective function value versus iteration times. Here, the training data are formed by 64 samples of subject 1 in the Extended Yale B database, and each sample is resized to 48 x 42. The number of projection axes is r = 8. [Figure not reproduced; caption retained.]

C. Connections to Existing 2-D Methods

Compared with 2-DPCA, the projection axes of N-2-DPCA are eigenvectors of the matrix D = sum_{i=1}^s A_i^T W_i W_i^T A_i, which can be viewed as a weighted version of the image covariance matrix G in 2-DPCA. In particular, each image sample A_i is weighted by the corresponding matrix W_i, and the weighting matrix W_i is updated after each iteration. In the N-2-DPCA algorithm, we set the initial W_i to the identity matrix; that is, the 2-DPCA solution provides an initial solution for N-2-DPCA.

The model of Schatten 1-norm PCA [19] is

    max_P sum_{i=1}^s ||A_i P||_*   s.t. P^T P = I_r.   (17)

Its criterion function is different from that of our method. Moreover, the solution of (17) is obtained under the condition that P^T P = P P^T = I, which is a very strong constraint: in general, a projection matrix P has only a small number of columns. Hence, the algorithm of Schatten 1-norm PCA is not an exact algorithm for computing the column-rank-deficient matrix P. In contrast, our N-2-DPCA algorithm needs no such additional condition; it is an exact algorithm.

Table II shows the nuclear-norm values of the optimal projection matrices of 2-DPCA, L1-2-DPCA, Schatten 1-norm PCA, and N-2-DPCA (using the same experimental setting as for Fig. 4). We can see that our model obtains the minimal nuclear-norm value among all of the methods.

TABLE II
Objective Function Values at Different Projection Matrices
[Table entries not recoverable from this extraction; caption retained.]

IV. NUCLEAR NORM-BASED BILATERAL 2-DPCA

N-2-DPCA adopts a unilateral projection (right-multiplication) scheme, which needs more coefficients than PCA for representing an image. As an extension of N-2-DPCA, N-B2-DPCA is developed, in which the left and right projection directions are calculated simultaneously. N-B2-DPCA can represent an image with far fewer coefficients than N-2-DPCA. The bilateral N-2-DPCA is formulated as follows:

    min_{P,Q} sum_{i=1}^s ||A_i - Q Q^T A_i P P^T||_*   s.t. P^T P = I_r, Q^T Q = I_t.   (18)

Equation (18) is a generalization of (6). In (18), Q in R^{m x t} and P in R^{n x r} are the left- and right-multiplying projection matrices, respectively.

Since there is no closed-form solution to problem (18), we update the variables P and Q alternately.

Given Q = Q^k, update P by

    min_P sum_{i=1}^s ||A_i - Q Q^T A_i P P^T||_*   s.t. P^T P = I_r.   (19)

Given P = P^{k+1}, update Q by

    min_Q sum_{i=1}^s ||A_i - Q Q^T A_i P P^T||_*   s.t. Q^T Q = I_t.   (20)

Similar to the way of solving N-2-DPCA, we use the iteratively reweighted method to solve (19) and (20).

A. Algorithms for Solving (19) and (20)

For (19), the procedure consists of the following iterations.

1) Given W_i = W_i^k, update P by

    P^{k+1} = arg min_P sum_{i=1}^s ||W_i (A_i - Q Q^T A_i P P^T)||_F^2   s.t. P^T P = I_r.   (21)

2) Given P = P^{k+1}, update W_i by

    W_i^{k+1} = [(A_i - Q Q^T A_i P P^T)(A_i - Q Q^T A_i P P^T)^T]^{-1/4}.   (22)
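The reweighting in (21) and (22) can be read as a quadratic surrogate for the nuclear norm. The short derivation below is our own gloss (the paper's formal justification is not contained in this excerpt); it writes E_i = A_i - Q Q^T A_i P P^T for the residual and uses the symmetry of W_i:

```latex
% With W_i = (E_i E_i^T)^{-1/4} as in (22), the weighted Frobenius
% objective in (21) reduces to the nuclear norm of the residual E_i:
\|W_i E_i\|_F^2
  = \operatorname{tr}\!\big(W_i E_i E_i^T W_i^T\big)
  = \operatorname{tr}\!\big((E_i E_i^T)^{-1/2} E_i E_i^T\big)
  = \operatorname{tr}\!\big((E_i E_i^T)^{1/2}\big)
  = \|E_i\|_* .
```

The eps_i I term added in step 5 of Algorithm 1 plays the same regularizing role when E_i E_i^T is singular, so the inverse fourth root remains well defined.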

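As a concrete illustration, the unilateral Algorithm 1 above might be sketched in NumPy as follows. This is a minimal sketch under our own assumptions: the function name, the iteration cap `max_iter`, the stopping tolerance `tol`, and the small eigenvalue floor are our additions, not part of the paper.

```python
import numpy as np

def n2dpca(A_list, r, K=4, max_iter=50, tol=1e-7):
    """Iteratively reweighted method for N-2-DPCA (sketch of Algorithm 1)."""
    m, n = A_list[0].shape
    # Step 1: W_i^0 = I, eps_i^0 = 1
    W = [np.eye(m) for _ in A_list]
    eps = np.ones(len(A_list))
    P = None
    for _ in range(max_iter):
        # Step 2: D = sum_i A_i^T W_i W_i^T A_i
        D = sum(A.T @ Wi @ Wi.T @ A for A, Wi in zip(A_list, W))
        # Step 3: top-r orthonormal eigenvectors of D (eigh returns ascending order)
        _, vecs = np.linalg.eigh(D)
        P = vecs[:, ::-1][:, :r]
        for i, A in enumerate(A_list):
            # Step 4: residual X_i and eps_i = min(eps_i, sigma_{K+1}(X_i))
            X = A - A @ P @ P.T
            sv = np.linalg.svd(X, compute_uv=False)
            eps[i] = min(eps[i], sv[K] if K < len(sv) else 0.0)
            # Step 5: W_i = (X X^T + eps_i I)^{-1/4} via eigendecomposition
            w_vals, w_vecs = np.linalg.eigh(X @ X.T + eps[i] * np.eye(m))
            w_vals = np.maximum(w_vals, 1e-12)  # floor guards the inverse root
            W[i] = (w_vecs * w_vals**-0.25) @ w_vecs.T
        # Step 6: stop once all eps_i have (numerically) reached zero
        if eps.max() < tol:
            break
    return P  # Step 7: optimal projection matrix
```

In practice the loop converges in a few iterations, as the objective curve in Fig. 4 suggests for the Extended Yale B samples.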
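To reproduce the kind of convergence monitoring shown in Fig. 4, the objective of (18) can be evaluated directly; for the unilateral model, take Q = I. A minimal sketch, with the function name and example data our own:

```python
import numpy as np

def nuclear_objective(A_list, P, Q):
    """Objective of (18): sum of nuclear norms of the reconstruction residuals."""
    return sum(
        np.linalg.svd(A - Q @ Q.T @ A @ P @ P.T, compute_uv=False).sum()
        for A in A_list
    )

# Example: with full-rank projections the residual vanishes and the
# objective is exactly zero; truncating P (smaller r) makes it positive.
rng = np.random.default_rng(0)
A_list = [rng.standard_normal((6, 5)) for _ in range(4)]
P_full, Q_full = np.eye(5), np.eye(6)
print(nuclear_objective(A_list, P_full, Q_full))  # prints 0.0
```

Logging this value after each update of P (and Q, in the bilateral case) yields the monotonically decreasing curve seen in Fig. 4.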