
Vandermonde systems on Gauss-Lobatto Chebyshev nodes

A. Eisinberg, G. Fedele
Dip. Elettronica Informatica e Sistemistica, Università degli Studi della Calabria, 87036, Rende (Cs), Italy
Abstract
This paper deals with Vandermonde matrices V_n whose nodes are the Gauss-Lobatto Chebyshev nodes, also called extrema Chebyshev nodes. We give an analytic factorization and explicit formula for the entries of their inverse, and explore its computational issues. We also give asymptotic estimates of the Frobenius norm of both V_n and its inverse and present an explicit formula for the determinant of V_n.
Key words: Vandermonde matrices, Polynomial interpolation, Conditioning
1 Introduction
Vandermonde matrices defined by Ṽ_n(i, j) = x_j^{i-1}, i, j = 1, 2, ..., n, with x_j ∈ C, are still a topical subject in matrix theory and numerical analysis. The interest arises because they occur in applications, for example in polynomial and exponential interpolation, and because they are ill-conditioned, at least for positive or symmetric real nodes [1]. The primal system Ṽ_n a = b represents a moment problem, which arises, for example, when determining the weights of a quadrature rule, while the matrix V_n = Ṽ_n^T involved in the dual system V_n c = f plays an important role in polynomial approximation and interpolation problems [2,3]. The special structure of V_n allows us to use ad hoc algorithms that require O(n^2) elementary operations for solving a Vandermonde system. The most celebrated of them is the one by Björck and Pereyra [4]; these algorithms frequently produce surprisingly accurate solutions, even when V_n is ill-conditioned [2]. Bounds or estimates of the norm of both V_n and V_n^{-1} are also interesting, for example to investigate the conditioning of the polynomial interpolation problem. Answers to these problems have been given first for special configurations of the nodes and recently for general ones [5].
Polynomial interpolation on several sets of nodes has received much attention over the past decade [6]. Theoretically, any discretization grid can be used to construct the interpolation polynomial. However, the interpolated solution between discretization points is accurate only if the individual building blocks behave well between points. Lagrangian polynomials on a uniform grid suffer from the Runge phenomenon: small data near the center of the interval are associated with wild oscillations in the interpolant, on the order of 2^n times bigger, near the edges of the interval [7,8]. The best choice is to use nodes that are clustered near the edges of the interval with an asymptotic density proportional to (1 - x^2)^{-1/2} as n → ∞ [9]. The family of Chebyshev points, obtained by projecting equally spaced points on the unit circle down to the interval [-1, 1], has such density properties. The classical Chebyshev grids are [10]:
Chebyshev nodes
$$T_1 = \left\{ x_k = \cos\left(\frac{2k-1}{2n}\pi\right),\; k = 1, 2, \ldots, n \right\} \tag{1}$$
Extended Chebyshev nodes
$$T_2 = \left\{ x_k = \frac{\cos\left(\frac{2k-1}{2n}\pi\right)}{\cos\left(\frac{\pi}{2n}\right)},\; k = 1, 2, \ldots, n \right\} \tag{2}$$
Gauss-Lobatto Chebyshev nodes (extrema)
$$T_3 = \left\{ x_k = \cos\left(\frac{k-1}{n-1}\pi\right),\; k = 1, 2, \ldots, n \right\} \tag{3}$$
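For illustration, the three grids can be generated directly from (1)-(3); the lines below are a small MATLAB sketch of ours (not part of the paper's algorithms), for a given n:

n  = 8;                                  % any number of nodes
k  = 1:n;
T1 = cos((2*k-1)*pi/(2*n));              % Chebyshev nodes (roots), Eq. (1)
T2 = T1./cos(pi/(2*n));                  % extended Chebyshev nodes, Eq. (2)
T3 = cos((k-1)*pi/(n-1));                % Gauss-Lobatto Chebyshev nodes, Eq. (3)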
In [11] it is proved that interpolation at the Chebyshev polynomial extrema minimizes the diameter of the set of the vectors of coefficients of all possible polynomials interpolating the perturbed data. Although the set of Gauss-Lobatto Chebyshev nodes fails to be a good approximation to the optimal interpolation set, such a set is of considerable interest since the norm of the corresponding interpolation operator P_n(T_3) is less than the norm of the operator P_n(T_1) induced by interpolation at the Chebyshev roots [12].
This paper deals with Vandermonde matrices on Gauss-Lobatto Chebyshev nodes. Throughout the paper we present a factorization of the inverse of such a matrix and derive an algorithm for solving the primal and dual systems. We also give asymptotic estimates of the Frobenius norm of both V_n and its inverse and an explicit formula for det(V_n). A point of interest in this matrix is the relatively moderate growth, versus n, of the condition number κ_2(V_n) [13,3]. Figure 1 shows the κ_2 comparison between the Vandermonde matrices on the Chebyshev nodes (V_n(T_1)), the extended Chebyshev nodes (V_n(T_2)) and the Gauss-Lobatto Chebyshev nodes (V_n(T_3)).
Fig. 1. Plot of the ratios κ_2(V_n(T_3))/κ_2(V_n(T_1)) and κ_2(V_n(T_3))/κ_2(V_n(T_2)).
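The ratios plotted in Fig. 1 can be reproduced along the following lines; this is a sketch of ours (it assumes a MATLAB version with implicit array expansion) and not part of the proposed algorithms:

n  = 40;
k  = (1:n)';
x1 = cos((2*k-1)*pi/(2*n));              % T1
x2 = x1./cos(pi/(2*n));                  % T2
x3 = cos((k-1)*pi/(n-1));                % T3
V  = @(x) x(:).^(0:n-1);                 % Vandermonde matrix, V(i,j) = x_i^(j-1)
r1 = cond(V(x3))/cond(V(x1));            % kappa_2(V_n(T3)) / kappa_2(V_n(T1))
r2 = cond(V(x3))/cond(V(x2));            % kappa_2(V_n(T3)) / kappa_2(V_n(T2))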
2 Preliminaries
Let V_n be the Vandermonde matrix defined on the set of n distinct nodes X_n = {x_1, ..., x_n}:
$$V_n(i,j) = x_i^{j-1}, \quad i, j = 1, \ldots, n \tag{4}$$
In [14] the authors show that the inverse of the Vandermonde matrix V_n, namely W_n, is:
$$W_n(i,j) = \psi(n,j)\,\phi(n,i,j), \quad i, j = 1, \ldots, n \tag{5}$$
where the function φ(n,i,j) is defined as:
$$\phi(n,i,j) = (-1)^{i+j} \sum_{r=0}^{n-i} (-1)^r\, x_j^r\, \sigma(n, n-i-r), \quad i, j = 1, \ldots, n \tag{6}$$
and the functions σ(m,s) and ψ(m,s) are recursively defined as follows:
$$\begin{cases} \sigma(m, s) = \sigma(m-1, s) + x_m\, \sigma(m-1, s-1), & m, s \text{ integer} \\ \sigma(m, 0) = 1, & m = 0, 1, \ldots \\ (s < 0) \vee (m < 0) \vee (s > m) \;\Rightarrow\; \sigma(m, s) = 0 \end{cases} \tag{7}$$
$$\begin{cases} \psi(m+1, s) = \dfrac{\psi(m, s)}{x_{m+1} - x_s}, & m \text{ integer}; \; s = 1, \ldots, m \\[2mm] \psi(m+1, m+1) = \displaystyle\prod_{k=1}^{m} \frac{1}{x_{m+1} - x_k} \\[2mm] \psi(2, 1) = \psi(2, 2) = \dfrac{1}{x_2 - x_1} \end{cases} \tag{8}$$
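As an illustration of the recurrences (7) and (8), the sketch below (ours; indices are shifted by one because MATLAB arrays start at 1) tabulates σ(m,s) and ψ(m,s) for an arbitrary set of distinct nodes:

x = [1 0.5 -0.5 -1];                     % any distinct nodes (here T3 with n = 4)
n = numel(x);
sig = zeros(n+1, n+1);                   % sig(m+1, s+1) stores sigma(m, s)
sig(:, 1) = 1;                           % sigma(m, 0) = 1
for m = 1:n
    for s = 1:m
        sig(m+1, s+1) = sig(m, s+1) + x(m)*sig(m, s);    % recurrence (7)
    end
end
psi = zeros(n, n);                       % psi(m, k), defined for m >= 2
psi(2, 1) = 1/(x(2) - x(1));             % starting values of (8)
psi(2, 2) = psi(2, 1);
for m = 2:n-1
    for s = 1:m
        psi(m+1, s) = psi(m, s)/(x(m+1) - x(s));         % recurrence (8)
    end
    psi(m+1, m+1) = 1/prod(x(m+1) - x(1:m));             % last entry of (8)
end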
By (5), taking into account (6), W_n can be factorized as:
$$W_n = S \cdot P \cdot F \tag{9}$$
where:
$$S(i,j) = (-1)^{i+j+1}\, \sigma(n, n+1-i-j), \quad i = 1, \ldots, n; \; j = 1, \ldots, n+1-i \tag{10}$$
$$P(i,j) = (-1)^j\, x_j^{i-1}, \quad i, j = 1, \ldots, n \tag{11}$$
$$F = \mathrm{diag}\{\psi(n,i)\}_{i=1,2,\ldots,n} \tag{12}$$
Note that:
$$S_m(x) = \prod_{i=1}^{m}(x - x_i) = \sum_{r=0}^{m} (-1)^r\, \sigma(m,r)\, x^{m-r} \tag{13}$$
$$S_m'(x_k) = (-1)^{m+k}\, \frac{1}{\psi(m,k)}, \quad k = 1, \ldots, m \tag{14}$$
3 Main results
We start by noting that, for some sets of interpolation nodes, explicit expressions for σ and ψ may be found in [15]. We consider the set of Gauss-Lobatto Chebyshev nodes (X_n = T_3) and give the proof of some properties useful in the sequel.
Lemma 1
$$\begin{cases} \sigma(n, 2s) = (-1)^s\, \dfrac{1}{2^{n-2}} \displaystyle\sum_{q=1}^{\lceil n/2 \rceil} \binom{n-1}{2q-1}\binom{q}{s}, & s = 0, \ldots, \lfloor n/2 \rfloor \\[2mm] \sigma(n, 2s+1) = 0, & s = 0, \ldots, \lfloor n/2 \rfloor \end{cases} \tag{15}$$
where the notations ⌊·⌋ and ⌈·⌉ denote the floor and ceiling functions, respectively [16].
Proof. It is easy to show that (13) can be rewritten as:
$$S_n(x) = \frac{1}{2^{n-2}}\,(x-1)(x+1)\,U_{n-2}(x) \tag{16}$$
where
$$U_m(x) = \frac{\sin[(m+1)\arccos(x)]}{\sin[\arccos(x)]}$$
is the Chebyshev polynomial of the second kind of order m. But [17]:
$$U_{n-2}(x) = \sum_{q=1}^{\lceil n/2 \rceil} (-1)^{q+1} \binom{n-1}{2q-1}\, x^{n-2q}\,(1-x^2)^{q-1} \tag{17}$$
By substituting (17) into (16) one has:
$$S_n(x) = \frac{1}{2^{n-2}} \sum_{q=1}^{\lceil n/2 \rceil} \sum_{s=0}^{q} (-1)^s \binom{n-1}{2q-1}\binom{q}{s}\, x^{n-2s} \tag{18}$$
and, therefore, (15) follows.
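The factorization (16) used in the proof is easy to check numerically; a small sketch of ours:

n   = 7;
xk  = cos((0:n-1)*pi/(n-1));                  % Gauss-Lobatto Chebyshev nodes T3
xt  = linspace(-0.99, 0.99, 11);              % test points inside (-1, 1)
Sn  = arrayfun(@(x) prod(x - xk), xt);        % S_n(x) = prod(x - x_k)
U   = sin((n-1)*acos(xt))./sin(acos(xt));     % U_{n-2}(x), second kind
err = max(abs(Sn - (xt.^2 - 1).*U/2^(n-2)));  % should be at roundoff level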
Lemma 2
$$S_n'(x_k) = \frac{n-1}{2^{n-2}} \left[ (-1)^{n+k} - (-1)^n\, \delta_{k,1} + \delta_{k,n} \right] \tag{19}$$
Proof. Equation (16) can be rewritten as:
$$S_n(x) = \frac{1}{2^{n-2}}\,(x-1)(x+1)\, \frac{\sin[(n-1)\arccos(x)]}{\sin[\arccos(x)]}$$
therefore, by standard algebraic manipulations:
$$S_n'(x_k) = \frac{1}{2^{n-2}}\,(n-1)\cos[(n-k)\pi] \;-\; \frac{1}{2^{n-2}}\, \frac{\cos\left(\frac{k-1}{n-1}\pi\right)}{\sqrt{1-\cos^2\left(\frac{k-1}{n-1}\pi\right)}}\, \sin[(n-k)\pi]$$
Noting that:
$$\lim_{k\to 1} \frac{\cos\left(\frac{k-1}{n-1}\pi\right)}{\sqrt{1-\cos^2\left(\frac{k-1}{n-1}\pi\right)}}\, \sin[(n-k)\pi] = (-1)^n\,(n-1)$$
$$\lim_{k\to n} \frac{\cos\left(\frac{k-1}{n-1}\pi\right)}{\sqrt{1-\cos^2\left(\frac{k-1}{n-1}\pi\right)}}\, \sin[(n-k)\pi] = -(n-1)$$
(19) follows.
By substituting (19) into (14), one has:
$$\psi(n,k) = \begin{cases} \dfrac{2^{n-3}}{n-1}, & k = 1, n \\[2mm] \dfrac{2^{n-2}}{n-1}, & k = 2, \ldots, n-1 \end{cases} \tag{20}$$
Lemma 3 An alternative formulation of (15) is:
$$\sigma(n, 2s) = (-1)^s\, \frac{1}{2^{2s}} \binom{n-s}{s}\, \frac{n^2 - n - 2s}{(n-s-1)(n-s)}, \quad s = 0, 1, \ldots, \lfloor n/2 \rfloor \tag{21}$$
Proof. By the recurrence properties of the second-kind Chebyshev polynomials [18], one has:
$$S_n(x) - x\,S_{n-1}(x) + \frac{1}{4}\,S_{n-2}(x) = 0$$
therefore
$$\sum_{s=0}^{\lfloor n/2 \rfloor} \sigma(n,2s)\,x^{n-2s} \;-\; x \sum_{s=0}^{\lfloor (n-1)/2 \rfloor} \sigma(n-1,2s)\,x^{n-1-2s} \;+\; \frac{1}{4} \sum_{s=0}^{\lfloor (n-2)/2 \rfloor} \sigma(n-2,2s)\,x^{n-2-2s} = 0 \tag{22}$$
must hold. Equation (22) can be verified by standard algebraic manipulations for both even and odd n.
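The closed form (21) can be compared with the coefficients of the node polynomial returned by MATLAB's poly; a sketch of ours (recall from (13) that the coefficient of x^(n-2s) in S_n(x) equals σ(n,2s)):

n   = 9;
xk  = cos((0:n-1)*pi/(n-1));                  % Gauss-Lobatto Chebyshev nodes
c   = poly(xk);                               % coefficients of prod(x - x_k)
s   = 0:floor(n/2);
s21 = (-1).^s .* arrayfun(@(t) nchoosek(n-t, t), s) ...
      .* (n^2 - n - 2*s) ./ ((n-s-1).*(n-s)) ./ 4.^s;    % Eq. (21)
err = max(abs(s21 - c(2*s+1)));               % c(2s+1) = sigma(n, 2s)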
By rearranging (10), (11) and (12), one has:
$$\begin{cases} S(i,j) = (-1)^i\, \sigma(n, n+1-i-j), & i = 1, \ldots, n; \; j = 1, \ldots, n+1-i \\[1mm] P(i,j) = \left[\cos\left(\frac{j-1}{n-1}\pi\right)\right]^{i-1}, & i, j = 1, \ldots, n \\[1mm] F(i,i) = (-1)^i\, \psi(n,i), & i = 1, \ldots, n \end{cases} \tag{23}$$
Following the same lines as in [19], the matrix P can be factorized as:
$$P = D \cdot U \cdot H \tag{24}$$
where:
$$\begin{cases} D(i,i) = \dfrac{1}{2^{i-2}}, & i = 2, \ldots, n \\[1mm] D(1,1) = 1 \end{cases} \tag{25}$$
$$\begin{cases} U(2i-1, 1) = \dbinom{2i-3}{i-1}, & i = 1, \ldots, \lceil n/2 \rceil \\[1mm] U(2i, 2j) = \dbinom{2i-1}{i-j}, & j = 1, \ldots, \lfloor n/2 \rfloor, \; i = j, \ldots, \lfloor n/2 \rfloor \\[1mm] U(2i-1, 2j-1) = \dbinom{2i-2}{i-j}, & j = 2, \ldots, \lceil n/2 \rceil, \; i = j, \ldots, \lceil n/2 \rceil \end{cases} \tag{26}$$
$$H(i,j) = \cos\left(\frac{(i-1)(j-1)}{n-1}\pi\right), \quad i = 1, \ldots, n, \; j = 1, \ldots, n \tag{27}$$
If one defines the matrix Q as:
$$Q(i,j) = 2^{n-i-1}\, [S \cdot D \cdot U](i,j), \quad i, j = 1, \ldots, n \tag{28}$$
then (9) becomes:
$$W_n = \frac{1}{n-1}\, K \cdot Q \cdot H \cdot F \tag{29}$$
where
$$K = \mathrm{diag}\{2^{i-1}\}_{i=1,2,\ldots,n} \tag{30}$$
$$\begin{cases} F(1,1) = -\dfrac{1}{2} \\[1mm] F(i,i) = (-1)^i, & i = 2, \ldots, n-1 \\[1mm] F(n,n) = (-1)^n\, \dfrac{1}{2} \end{cases} \tag{31}$$
We present here an efficient scheme for the computation of Q. It can be shown that Q can be built by the following equalities:
$$\begin{cases} Q(1, n-2) = 2 \\[1mm] Q(i, n+1-i) = (-1)^i, & i = 1, 2, \ldots, n \\[1mm] Q(1, n-2j-2) = -Q(1, n-2j), & j = 1, 2, \ldots, \lceil \frac{n-4}{2} \rceil \\[1mm] Q(i, n+1-i-2j) = -Q(i, n+3-i-2j) - Q(i-1, n+2-i-2j), & i = 2, 3, \ldots, n; \; j = 1, 2, \ldots, j^{*} \\[1mm] Q(i, 1) = Q(i,1)/2, & i = 1, 2, \ldots, n \end{cases} \tag{32}$$
where
$$j^{*} = \begin{cases} \left\lfloor \frac{n-i}{2} \right\rfloor, & n \text{ even} \\[1mm] \left\lceil \frac{n-1-i}{2} \right\rceil, & n \text{ odd} \end{cases}$$
4 The Frobenius norm of V_n and W_n
Proposition 1 The Frobenius norm of V_n is
$$\|V_n\|_F = \sqrt{\, n + \frac{n-1}{2^{2n-3}} + \frac{2(n-1)}{\sqrt{\pi}}\, \frac{\Gamma\!\left(n+\frac{1}{2}\right)}{\Gamma(n)} \,} \tag{33}$$
where Γ(x) is the gamma function [20].
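A quick numerical check of (33); the sketch below is ours (gamma is MATLAB's built-in gamma function):

n   = 12;
xk  = cos((0:n-1)'*pi/(n-1));                 % Gauss-Lobatto Chebyshev nodes
V   = xk.^(0:n-1);                            % V_n(i,j) = x_i^(j-1)
nf  = sqrt(n + (n-1)/2^(2*n-3) ...
           + 2*(n-1)*gamma(n+1/2)/(sqrt(pi)*gamma(n)));  % Eq. (33)
err = abs(nf - norm(V, 'fro'));               % should be at roundoff level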
Proof.
$$\|V_n\|_F^2 = \sum_{i=1}^{n} \sum_{s=1}^{n} \left[\cos\left(\frac{s-1}{n-1}\pi\right)\right]^{2i-2} \tag{34}$$
But
$$\left[\cos\left(\frac{s-1}{n-1}\pi\right)\right]^{2i-2} = \frac{1}{2^{2i-2}} \binom{2i-2}{i-1} + \frac{1}{2^{2i-2}} \sum_{k=0}^{i-2} 2 \binom{2i-2}{k} \cos\left(\frac{2(i-1-k)(s-1)}{n-1}\pi\right) \tag{35}$$
then (34) becomes:
$$\|V_n\|_F^2 = \sum_{i=1}^{n} \sum_{s=1}^{n} \frac{1}{2^{2i-2}} \binom{2i-2}{i-1} + \sum_{i=1}^{n} \sum_{s=1}^{n} \sum_{k=0}^{i-2} \frac{2}{2^{2i-2}} \binom{2i-2}{k} \cos\left(\frac{2(i-1-k)(s-1)}{n-1}\pi\right) \tag{36}$$
By using the identity
$$\sum_{i=1}^{n} \sum_{s=1}^{n} \frac{1}{2^{2i-2}} \binom{2i-2}{i-1} = \frac{2n}{\sqrt{\pi}}\, \frac{\Gamma\!\left(n+\frac{1}{2}\right)}{\Gamma(n)} \tag{37}$$
and by standard algebraic manipulations, (33) follows.
Proposition 2 The Frobenius norm of W_n is given by
$$\|W_n\|_F^2 = \frac{1}{2(n-1)} + \frac{2^{2n-4}}{n-1} \left[ \Lambda_1(n) + \frac{1}{n-1}\, \Lambda_2(n) \right] \tag{38}$$
where
$$\Lambda_1(n) = \sum_{k=1}^{n} \sum_{r=0}^{\lfloor\frac{n-k}{2}\rfloor} \sum_{s=0}^{\lfloor\frac{n-k}{2}\rfloor} (-1)^{n+k+r+s} \binom{-\frac{1}{2}}{\,n-k-r-s\,}\, \sigma(n,2r)\,\sigma(n,2s) \tag{39}$$
and
$$\Lambda_2(n) = -\frac{1}{2} \sum_{k=1}^{n} \sum_{r=0}^{\lfloor\frac{n-k}{2}\rfloor} \sum_{s=0}^{\lfloor\frac{n-k}{2}\rfloor} \sigma(n,2r)\,\sigma(n,2s) \tag{40}$$
Proof. Equation (38) follows from standard algebraic manipulations.
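As a sanity check on (38)-(40), the sketch below (ours) evaluates Λ_1(n) and Λ_2(n) directly and compares the result with the Frobenius norm of a computed inverse; the factor (−1)^{n+k+r+s} C(−1/2, n−k−r−s) of (39) is evaluated in the equivalent form C(2m,m)/4^m with m = n−k−r−s:

n   = 6;
xk  = cos((0:n-1)'*pi/(n-1));                 % Gauss-Lobatto Chebyshev nodes
V   = xk.^(0:n-1);                            % V_n(i,j) = x_i^(j-1)
cfs = poly(xk);                               % sigma(n,2s) = cfs(2s+1), see (13)
sig = @(twoS) cfs(twoS+1);
L1 = 0;  L2 = 0;
for k = 1:n
    for r = 0:floor((n-k)/2)
        for s = 0:floor((n-k)/2)
            m  = n-k-r-s;
            L1 = L1 + nchoosek(2*m, m)/4^m * sig(2*r)*sig(2*s);   % Eq. (39)
            L2 = L2 - 0.5 * sig(2*r)*sig(2*s);                    % Eq. (40)
        end
    end
end
w2  = 1/(2*(n-1)) + 2^(2*n-4)/(n-1)*(L1 + L2/(n-1));              % Eq. (38)
err = abs(w2 - norm(inv(V), 'fro')^2);        % should be at roundoff level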
Fig. 2. Relative error in estimating ‖W_n‖_F.

Taking into account only the term Λ_1(n) in (38) and using the facts
$$\sum_{k=1}^{n} \sum_{r=0}^{\lfloor\frac{n-k}{2}\rfloor} \sum_{s=0}^{\lfloor\frac{n-k}{2}\rfloor} [\,\cdot\,] = \sum_{r=0}^{\lfloor\frac{n-1}{2}\rfloor} \sum_{s=0}^{\lfloor\frac{n-1}{2}\rfloor} \sum_{k=1}^{n-2\max(r,s)} [\,\cdot\,]$$
$$\sum_{s=0}^{q} \binom{p-\frac{3}{2}}{s-1} \binom{q}{s} = \binom{p+q-\frac{3}{2}}{q-1}$$
we give the following conjecture.
Conjecture 1
$$\|W_n\|_F \approx \sqrt{\, \frac{2}{n-1} \sum_{p=1}^{\lceil n/2 \rceil - 1} \sum_{q=1}^{\lceil n/2 \rceil - 1} \binom{n-1}{2p-1} \binom{n-1}{2q-1} \binom{p+q-\frac{3}{2}}{q-1} \,}, \quad n \to \infty \tag{41}$$
Figure 2 shows the accuracy of the estimate (41) of the Frobenius norm of W_n, in terms of relative error, for n in the interval [20, 100].
5 The determinant of V_n
The next proposition gives the value of the determinant of V_n.
Proposition 3
$$\det(V_n) = \frac{2\,(n-1)^{n/2}}{2^{\frac{n(n-2)}{2}}} \tag{42}$$
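Proposition 3 is easy to check numerically for moderate n; a sketch of ours (MATLAB's det may differ from (42) in sign, since the nodes of T_3 are taken in decreasing order):

n   = 8;
xk  = cos((0:n-1)'*pi/(n-1));                 % Gauss-Lobatto Chebyshev nodes
V   = xk.^(0:n-1);                            % V_n(i,j) = x_i^(j-1)
d42 = 2*(n-1)^(n/2) / 2^(n*(n-2)/2);          % value given by (42)
err = abs(abs(det(V)) - d42);                 % compare magnitudes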
Proof. By the definition of the Vandermonde determinant we have
$$\det(V_n) = \prod_{1 \le i < j \le n} \left[ \cos\left(\frac{i-1}{n-1}\pi\right) - \cos\left(\frac{j-1}{n-1}\pi\right) \right] = 2^{\frac{n(n-1)}{2}} \prod_{1 \le i < j \le n} \sin\left(\frac{i+j-2}{2n-2}\pi\right) \sin\left(\frac{j-i}{2n-2}\pi\right)$$
and, simply rearranging the terms, we can write
$$\det(V_n) = 2^{\frac{n(n-1)}{2}} \prod_{k=1}^{\lfloor n/2 \rfloor} \left[\sin\left(\frac{2k-1}{2n-2}\pi\right)\right]^{n+1} \prod_{k=1}^{\lceil n/2 \rceil - 1} \left[\sin\left(\frac{2k}{2n-2}\pi\right)\right]^{n}$$
Finally [17]
$$\prod_{k=1}^{\lfloor n/2 \rfloor} \left[\sin\left(\frac{2k-1}{2n-2}\pi\right)\right]^{n+1} = 2^{\frac{2+n-n^2}{2}}, \qquad \prod_{k=1}^{\lceil n/2 \rceil - 1} \left[\sin\left(\frac{2k}{2n-2}\pi\right)\right]^{n} = \frac{(n-1)^{n/2}}{2^{\frac{n(n-2)}{2}}}$$
which concludes the proof.
6 Numerical experiments
This section shows some numerical experiments, aimed at investigating the accuracy of the proposed factorization. We have solved several dual systems V_n c = f and primal systems Ṽ_n a = b and have compared our results with those obtained by the Björck-Pereyra algorithms. We have used the package Mathematica [21] to compute the approximate solutions ĉ and â, the exact ones (using extended precision of 1024 significant digits) and the errors
$$\epsilon_c = \max_{1 \le i \le n} \frac{|\hat{c}_i - c_i|}{|c_i|} \tag{43}$$
$$\epsilon_a = \max_{1 \le i \le n} \frac{|\hat{a}_i - a_i|}{|a_i|} \tag{44}$$
of both our and the Björck-Pereyra algorithms. A set of experiments has been run for n = 3, ..., 10, 20, 30, 40, 50, 100. We have generated the right-hand sides f and b with random entries uniformly distributed in the interval [-1, 1]. Tables 1 and 2 show the maximum and mean value of (43) and (44) over 10000 runs, the fraction of trials in which the proposed algorithms (EF) give an equal or more accurate result than the Björck-Pereyra ones (BP), and also the probability that ε_c and ε_a are less than or equal to 10nu, where u = 2^{-53} is the unit roundoff. As to the computational cost, the EF algorithms require 3n^2 + O(n) flops, while the BP algorithms cost 2.5n^2 + O(n) flops. The EF algorithms seem to perform better than the Björck-Pereyra ones in terms of numerical accuracy and stability, as can be seen for large values of n. The same results are obtained by computing the approximate solutions ĉ and â in Matlab and then importing the output into Mathematica in order to compare them with the exact ones. For the Matlab code, refer to Appendix A.
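A minimal usage example of the routine of Appendix A (a sketch of ours): it solves one dual system on the Gauss-Lobatto Chebyshev grid and reports the relative residual.

n   = 20;
f   = 2*rand(n, 1) - 1;                       % random right-hand side in [-1, 1]
c   = glc(f);                                 % O(n^2) solver of Appendix A
xk  = cos((0:n-1)'*pi/(n-1));                 % Gauss-Lobatto Chebyshev nodes
V   = xk.^(0:n-1);                            % V_n(i,j) = x_i^(j-1)
res = norm(V*c - f, inf)/norm(f, inf);        % relative residual of the solution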
n    BP max    BP mean   EF max    EF mean   s.r. EF vs BP   p(ε_c ≤ 10nu)
3    2.34e-13  3.07e-16  2.15e-15  4.02e-17  0.98            0.99
4    1.73e-12  2.53e-15  4.03e-13  1.09e-15  0.75            0.99
5    9.31e-12  5.82e-15  4.65e-12  1.51e-15  0.93            0.98
6    1.43e-11  1.40e-14  1.54e-12  2.41e-15  0.94            0.97
7    2.47e-11  2.35e-14  6.60e-12  4.36e-15  0.96            0.97
8    3.24e-10  7.90e-14  5.67e-12  4.69e-15  0.99            0.95
9    6.20e-11  6.12e-14  1.12e-12  2.94e-15  0.99            0.96
10   1.56e-10  1.98e-13  9.00e-12  6.66e-15  0.99            0.95
20   1.17e-06  4.10e-10  5.47e-11  2.54e-14  1.00            0.92
30   2.10e-03  9.79e-07  2.22e-09  3.86e-13  1.00            0.91
40   5.68e+00  3.77e-03  1.91e-10  1.01e-13  1.00            0.92
50   7.61e+03  1.38e+01  4.04e-11  9.49e-14  1.00            0.90
100  8.52e+20  1.69e+18  1.68e-09  7.45e-13  1.00            0.88
Table 1. Dual problem. Maximum and mean value of ε_c. Success rate (s.r.) of the EF algorithm over 10000 runs.
n    BP max    BP mean   EF max    EF mean   s.r. EF vs BP   p(ε_a ≤ 10nu)
3    1.70e-13  2.90e-16  5.46e-13  3.26e-16  0.79            0.99
4    1.06e-10  1.33e-14  3.25e-11  3.99e-15  0.74            0.98
5    3.96e-11  8.87e-15  7.94e-13  1.32e-15  0.86            0.97
6    6.69e-11  2.68e-14  3.68e-12  2.57e-15  0.91            0.97
7    2.93e-11  1.66e-14  4.69e-12  2.91e-15  0.95            0.97
8    6.00e-11  3.52e-14  2.48e-12  3.15e-15  0.98            0.96
9    9.70e-11  4.33e-14  6.00e-12  3.16e-15  0.96            0.96
10   8.44e-11  7.77e-14  3.06e-11  7.37e-15  0.98            0.95
20   1.12e-08  3.25e-12  4.49e-11  1.98e-14  1.00            0.94
30   1.89e-07  9.62e-11  2.43e-11  2.81e-14  1.00            0.93
40   1.22e-05  4.26e-09  2.13e-10  4.33e-14  1.00            0.95
50   1.52e-05  3.18e-08  1.84e-11  2.56e-14  1.00            0.94
100  3.88e+02  1.68e-01  1.71e-10  7.30e-14  1.00            0.94
Table 2. Primal problem. Maximum and mean value of ε_a. Success rate (s.r.) of the EF algorithm over 10000 runs.
7 Conclusion
In this paper we derived an explicit factorization of the Vandermonde matrix on Gauss-Lobatto Chebyshev nodes. Such a factorization allows us to design an efficient algorithm to solve Vandermonde systems. The numerical experiments indicate that our approach is more stable than the existing Björck-Pereyra algorithm. Starting from these theoretical results, we are working on a conjecture on discrete orthogonal polynomials on Gauss-Lobatto Chebyshev nodes and its proof. The operation count and the accuracy obtained in the experiments on least-squares problems seem to be very competitive.
Appendix A - Matlab code
function c=glc(f);
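% GLC  Solves the dual Vandermonde system V_n c = f on the Gauss-Lobatto
%      Chebyshev nodes through the factorization (29), c = (1/(n-1)) K Q H F f.
%      Input:  f, right-hand side vector of length n >= 3.
%      Output: c, coefficient vector (column) of the interpolating polynomial.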
n=max(size(f));
nf=floor(n/2);
f(1)=f(1)/2;
f(n)=f(n)/2;
for i=1:n
f(i)=(-1)^i*f(i);
end
% Matrix H
%--------------------------------------------------------
H=zeros(n);
H(1,1:nf)=ones(1,nf);
H(1:nf,1)=ones(nf,1);
if rem(n,2)==0
start=1;
else
for j=1:ceil(n/2)
H(nf+1,2*j-1)=(-1)^(j+1);
end
H(:,nf+1)=H(nf+1,:);
start=2;
end
for i=2:nf
for j=i:nf
H(i,j)=cos(rem((i-1)*(j-1),2*n-2)*pi/(n-1));
H(j,i)=H(i,j);
end
end
for j=1:nf
if rem(j,2)==0
H(nf+start:n,j)=-flipud(H(1:nf,j));
else
H(nf+start:n,j)=flipud(H(1:nf,j));
end
end
for i=1:n
if rem(i,2)==0
H(i,nf+start:n)=-fliplr(H(i,1:nf));
else
H(i,nf+start:n)=fliplr(H(i,1:nf));
end
end
%--------------------------------------------------------
% Matrix Q
%--------------------------------------------------------
Q=zeros(n);
for i=1:n
Q(i,n+1-i)=(-1)^i;
end
Q(1,n-2)=2;
for j=1:ceil((n-4)/2)
Q(1,n-2*j-2)=-Q(1,n-2*j);
end
for i=2:n
if rem(i,2)==0
jmax=floor((n-i)/2);
else
jmax=ceil((n-1-i)/2);
end
for j=1:jmax
Q(i,n+1-i-2*j)=-Q(i,n+3-i-2*j)-Q(i-1,n+2-i-2*j);
end
end
Q(:,1)=Q(:,1)/2;
%--------------------------------------------------------
aux=H*f;
c=zeros(n,1);
for i=1:n
for j=rem(n+i,2)+1:2:n+1-i
c(i)=c(i)+Q(i,j)*aux(j);
end
end
for i=1:n
c(i)=2^(i-1)*c(i);
end
c=c/(n-1);
References
[1] Gautschi, W., Inglese, G., Lower bounds for the condition number of Vandermonde matrices, Numer. Math., 52 (1988), 241-250.
[2] Golub, G. H., Van Loan, C. F., Matrix Computations, third ed., Johns Hopkins Univ. Press, Baltimore, MD, 1996.
[3] Higham, N. J., Accuracy and Stability of Numerical Algorithms, SIAM,
Philadelphia, PA, 1996.
[4] Björck, A., Pereyra, V., Solution of Vandermonde systems of linear equations, Math. of Computation, 24 (1970), 893-903.
[5] Tyrtyshnikov, E., How bad are Hankel matrices?, Numer. Math., 67 (1994), 261-269.
[6] Meijering, E., A Chronology of Interpolation: From Ancient Astronomy to
Modern Signal and Image Processing, Proc. of IEEE, 90 (2002), 319-342.
[7] Björck, A., Dahlquist, G., Numerical Methods, Prentice-Hall, Englewood Cliffs, NJ, 1974.
[8] Henrici, P., Essentials of Numerical Analysis, Wiley, New York, 1982.
[9] Berrut, J. P., Trefethen, L. N., Barycentric Lagrange interpolation, SIAM Rev.
46(2004), 501-517.
[10] Brutman, L., Lebesgue functions for polynomial interpolation - a survey, Annals
of Numerical Mathematics, 4 (1997), 111-127.
[11] Belforte, G., Gay, P., Monegato, G., Some new properties of Chebyshev
polynomials, J. Comput. Appl. Math., 117 (2000), 175-181.
[12] Brutman, L., A note on polynomial interpolation at the Chebyshev extrema
nodes, Journal of Approx. Theory, 42 (1984), 283-292.
[13] Gautschi, W., Norm estimates for inverses of Vandermonde matrices, Numer. Math., 23 (1974), 337-347.
[14] Eisinberg, A., Picardi, C., On the inversion of Vandermonde matrix, Proc. of
the 8th Triennial IFAC World Congress, Kyoto, Japan, 1981.
[15] Eisinberg, A., Fedele, G., Polynomial interpolation and related algorithms,
Twelfth International Colloquium on Num. Anal. and Computer Science with
Appl., Plovdiv, 2003.
[16] Knuth, D. E., The Art of Computer Programming, vol. 1, second ed., Addison-
Wesley, Reading, MA, 1973.
[17] Gradshteyn, I. S., Ryzhik, I. M., Table of Integrals, Series and Products, third
ed., Academic Press, New York, 1965.
[18] Rivlin, T. J., The Chebyshev Polynomials, John Wiley & Sons, New York, 1974.
[19] Eisinberg, A., Franzè, G., Salerno, N., Rectangular Vandermonde matrices on Chebyshev nodes, Linear Algebra Appl., 338 (2001), 27-36.
[20] Gatteschi, L., Funzioni Speciali, UTET, 1973.
[21] Wolfram, S., Mathematica: a System for Doing Mathematics by Computers,
Second. ed., Addison-Wesley, 1991.