leads to another selfadjoint eigenequation. Thus, the entangled systems in Figure 3.5 give rise
to a pair of selfadjoint eigenequations.
In the eigenspaces of these selfadjoint matrices, the rectangular matrix $A$ (and its adjoint $A^*$) maps eigenvectors of the selfadjoint matrix $A^*A$ in the domain $V$ into the $r$-dimensional eigenspace in the codomain $W$ of $AA^*$, and vice versa:

$$A^*A : u_i \in V \to u_i \in V, \qquad AA^* : v_i \in W \to v_i \in W,$$
$$A : u_i \in V \to v_i \in W, \qquad A^* : v_i \in W \to u_i \in V.$$
The vectors $u_i$ and $v_i$ of the pseudo and dual pseudo eigenequations are, simultaneously, the eigenvectors and dual eigenvectors of the following selfadjoint matrices $A^*A$ and $AA^*$:

$$A u_i = \sigma_i v_i, \qquad A^* v_i = \mu_i u_i,$$
$$A^*A\, u_i = \mu_i \sigma_i\, u_i, \qquad AA^*\, v_i = \sigma_i \mu_i\, v_i.$$
Apply $A$ and $A^*$ to the first and the second of the selfadjoint eigenequations to return to the pseudo and the dual pseudo eigenequations. We point out that selfadjoint matrices such as $A^*A$ and $AA^*$, which we often see in linear algebra, can actually exist only in the entangled system of equations and its dual system, and the acrobatic pseudo and dual pseudo eigenequations crossing over the system and its dual system do not exist anywhere else in linear algebra.
The pseudo eigenvectors $\{u_i\}$ and dual pseudo eigenvectors $\{v_i\}$ form, respectively, $n \times r$- and $m \times r$-dimensional semi-unitary matrices $U_r$ and $V_r$, with

$$U_r^* U_r = I_r, \qquad V_r^* V_r = I_r,$$

while $U_r U_r^*$ and $V_r V_r^*$ are the orthogonal projections onto the $r$-dimensional eigenspaces sitting inside the $n$- and $m$-dimensional spaces $V$ and $W$.
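The semi-unitary factors can be checked numerically with numpy's reduced SVD; a minimal sketch, assuming an arbitrary test matrix and identifying numpy's left and right singular-vector factors with the $V_r$ and $U_r$ of the text (that identification is my illustrative choice):

```python
import numpy as np

# Hypothetical 5x3 rectangular matrix: codomain dimension m=5, domain n=3.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

# Reduced SVD: numpy's U plays the role of V_r (columns in the codomain W),
# numpy's Vh.conj().T plays the role of U_r (columns in the domain V).
Vr, sigma, Ur_h = np.linalg.svd(A, full_matrices=False)
Ur = Ur_h.conj().T
r = sigma.size

# Semi-unitarity: the r x r Gram matrices are identities ...
assert np.allclose(Ur.conj().T @ Ur, np.eye(r))
assert np.allclose(Vr.conj().T @ Vr, np.eye(r))

# ... while V_r V_r* is only an orthogonal projection in the
# m-dimensional codomain (idempotent, not the identity for r < m).
P = Vr @ Vr.conj().T
assert np.allclose(P @ P, P)
assert not np.allclose(P, np.eye(5))
```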
Now, the pseudo and dual pseudo eigenequations in matrix form are

$$A U_r = V_r \Sigma_r, \qquad A^* V_r = U_r M_r,$$
where $\Sigma_r$ and $M_r$ are $r$-dimensional diagonal matrices with $\sigma_{ii}$ and $\mu_{ii}$ on the main diagonals (the double indices on them are there because they are elements of matrices). Notice that the composite selfadjoint eigenequations of $A^*A$ and $AA^*$ follow from these. Multiplying the two equations from the left by $V_r^*$ and $U_r^*$, respectively, and using the semi-unitarity, there result
$$V_r^* A U_r = \Sigma_r, \qquad U_r^* A^* V_r = M_r.$$
The rectangular matrices $A$ and $A^*$ are thereby diagonalized simultaneously (the terminology here was coined by Cornelius Lanczos, a Hungarian mathematician; cf. C. Lanczos, Linear Differential Operators, Dover).
Multiplying the pseudo eigenequation with $U_r^*$ and the dual pseudo eigenequation with $V_r^*$ from the right, we obtain the fundamental decomposition of $A$ and $A^*$ as

$$A = V_r \Sigma_r U_r^*, \qquad A^* = U_r M_r V_r^*.$$
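The fundamental decomposition can be verified numerically. A sketch on an arbitrary test matrix; note that in numpy's SVD normalization the two diagonal factors coincide, so $M_r = \Sigma_r$ below:

```python
import numpy as np

# Arbitrary 4x6 rectangular matrix for illustration.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))

# Reduced SVD supplies the semi-unitary factors V_r, U_r and Sigma_r.
Vr, sigma, Ur_h = np.linalg.svd(A, full_matrices=False)
Sigma_r = np.diag(sigma)

# Fundamental decomposition A = V_r Sigma_r U_r*.
assert np.allclose(A, Vr @ Sigma_r @ Ur_h)

# Dual fundamental decomposition A* = U_r M_r V_r*  (here M_r = Sigma_r).
assert np.allclose(A.conj().T, Ur_h.conj().T @ Sigma_r @ Vr.conj().T)
```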
The route of the fundamental decomposition (the dual fundamental decomposition) is not a direct one from the domain to the codomain (the dual domain to the dual codomain) as traditionally interpreted, but a roundabout, taxi-driver's route as the fundamental rule. It is revealed that both the fundamental decomposition of the matrix $A$ and its diagonalization are impossible without the systemic participation of the dual system. Hence the entanglement of the dual system.
In the $n$-dimensional domain $V$ of $A$ resides an $r$-dimensional pseudo eigenspace of $A$, and likewise in the $m$-dimensional codomain $W$ of $A$ resides an $r$-dimensional dual pseudo eigenspace. Then, the $n$- and $m$-dimensional vectors $x \in V$ and $b \in W$ are related to the $r$-dimensional vectors $x_r$ and $b_r$ in these eigenspaces as $x = U_r x_r$ and $b = V_r b_r$. Substituting these into the equation $Ax = b$, we obtain the solution we sought:
$$x_i = \sigma_i^{-1} b_i, \qquad i = 1, 2, \ldots, r.$$
This corresponds to the least squares solution, but it is an $r$-dimensional vector solution. Outside the matrix $\Sigma_r$, the double indexing on $\sigma_{ii}$ in $\Sigma_r$ is dropped, hence $\sigma_i$. The solution can be expressed in the $V$ and $W$ spaces if we so wish.
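The $r$-dimensional solution can be sketched numerically: project $b$ into the dual pseudo eigenspace, divide componentwise by $\sigma_i$, and map back into the domain. Assuming an arbitrary full-column-rank test matrix and numpy's reduced SVD as the semi-unitary factors, the result agrees with the least squares solution:

```python
import numpy as np

# Overdetermined system: m=6 equations, n=4 unknowns, rank r=4.
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
b = rng.standard_normal(6)

Vr, sigma, Ur_h = np.linalg.svd(A, full_matrices=False)

b_r = Vr.conj().T @ b        # coordinates of b in the dual pseudo eigenspace
x_r = b_r / sigma            # componentwise x_i = b_i / sigma_i
x = Ur_h.conj().T @ x_r      # back to the n-dimensional domain V

# Same answer as the conventional least squares solver.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_ls)
```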
(2). The Fourier series expansion of continuous or even piece-wise continuous functions has been with us since 1807. But I discovered that mathematically, it cannot be a series expansion, but is instead a well-posed system of infinite-dimensional linear algebraic equations in which the Fourier coefficients are the components of an unknown vector in the domain space and the function to be expanded in series is a denumerably continuous, pre-assigned, non-homogeneous vector in the codomain space; the two spaces are in fact dual to each other, and the Fourier matrix maps the domain onto its codomain one-to-one. The algebraic structure leading to the solution here is distinct and unique: the inversion of this infinite-dimensional algebraic system of equations does not rely on the principal-axis bases, but instead on the biorthonormality of the entries of the Fourier matrix, namely an infinite set of sinusoidal scalar kernel functions, and their corresponding members of the adjoint Fourier matrix. The biorthonormality of these sinusoidal functions results in destructive interference for all off-diagonal entries and leaves only the diagonal entries to interfere constructively, leading to Dirichlet kernels, which have delta functions as their limit values. As a result, the inner product between the adjoint Fourier matrix and the Fourier matrix yields a diagonal matrix whose diagonal entries are Dirichlet kernels, which have the delta functions as their limits. After integration of these delta functions, the infinite-dimensional matrix turns into an identity matrix, thereby ensuring the unique solution of the infinite-dimensional algebraic equations. In this sense, this Fourier algebraic problem is unique, dispensing with the need for a change of bases. The domain and the codomain spaces are dual to each other. We repeat that there is no change of bases involved in the inversion of the infinite-dimensional system of equations.
Let us consider an $n$-dimensional equation for illustration. To begin with, the finite intervals in the domain and the codomain are made of denumerable rational numbers. In contrast, the real numbers are not denumerable, as in the case of the Fourier integral transformation. Thus, Fourier's works are composed of two different kinds of maps defined on two different sets of numbers: one is on the finite interval of denumerable rational numbers for the system of linear algebraic equations for continuous functions; the other is on an infinite interval of non-denumerable real numbers for the integral map for square-integrable functions.
First, consider a given continuous function over a finite interval $[0, l]$. The unknown vector $c(k)$ is defined over a denumerable finite interval $[k_1, k_n]$, $k_m = 2\pi m/l$, as $\{c_j(k_j)\}$ in the domain space. The problem is to find this unknown vector in the domain space, hence an inverse problem, or an infinite-dimensional system of algebraic equations. Let the variable $x$ in the codomain be a distance vector (with a physical dimension of length). Then the variable $k$ in the domain must represent a wavenumber vector (with a physical dimension of inverse length) if $xk$ is to be used as the dimensionless independent variable of the sinusoidal functions in the matrix, as Fourier proposed. In other words, $xk$ must be a linear functional.
At each point $x_i$ in the codomain, the $i$th equation is given by

$$f_i(x_i) = \frac{1}{l}\sum_{j=1}^{n} \Phi_{i,j}(x_i|k_j)\, c_j(k_j), \qquad \Phi_{i,j} = e^{i\langle x_i|k_j\rangle}.$$
Here $(\Phi_{i,j}(x_i|k_j))$ is the $n \times n$-dimensional Fourier matrix, which maps the domain one-to-one onto the codomain on the strength of the biorthonormality of the scalar kernels of the Fourier matrix with those of its adjoint Fourier matrix, which will be defined shortly. This biorthonormality was proved by Lejeune Dirichlet, a German mathematician. The matrix form of this algebraic equation is
$$\begin{pmatrix} f_1(x_1)\\ f_2(x_2)\\ \vdots\\ f_n(x_n)\end{pmatrix} = \frac{1}{l}\begin{pmatrix} e^{ix_1k_1} & e^{ix_1k_2} & \cdots & e^{ix_1k_n}\\ e^{ix_2k_1} & e^{ix_2k_2} & \cdots & e^{ix_2k_n}\\ \vdots & & \ddots & \vdots\\ e^{ix_nk_1} & e^{ix_nk_2} & \cdots & e^{ix_nk_n}\end{pmatrix}\begin{pmatrix} c_1(k_1)\\ c_2(k_2)\\ \vdots\\ c_n(k_n)\end{pmatrix}.$$
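A small finite-dimensional sketch of this matrix equation; the sample points $x_i = i\,l/n$ and the size $n = 8$ are my illustrative choices, not prescribed by the text:

```python
import numpy as np

n, l = 8, 1.0
x = np.arange(n) * l / n            # codomain sample points (assumed equispaced)
k = 2 * np.pi * np.arange(n) / l    # domain wavenumbers k_m = 2*pi*m / l

# n x n Fourier matrix with entries e^{i x_i k_j}.
Phi = np.exp(1j * np.outer(x, k))

# A pre-assigned coefficient vector in the domain space (arbitrary example).
c = np.zeros(n, dtype=complex)
c[1], c[3] = 1.0, 0.5j

# The n equations f_i = (1/l) * sum_j Phi_ij c_j, as one matrix product.
f = (1.0 / l) * (Phi @ c)
```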
This is essentially what Fourier did some two hundred years ago, but he was misled into declaring it an expansion, and everyone has accepted Fourier's declaration blindly until now. The so-called Fourier coefficients in the expansion theory are the solution of the above equation (the vector in the column matrix on the right side), which takes the form
$$c_j(k_j) = \int_0^l dx\, f(x)\, e^{-i\langle k_j|x\rangle}, \qquad j = 1, 2, \ldots, n,$$
where the denumerably continuous points on which $f(x)$ is defined allow the summation sign to be replaced with an integral sign, and $e^{-i\langle k_j|x\rangle}$ are the elements of the adjoint Fourier matrix, which is obtained by interchanging the rows and columns of the Fourier matrix and complex-conjugating the entries. This comes about by multiplying both sides of the given equation with the adjoint Fourier matrix and replacing the summation sign with an integral sign, relying on the continuous denumerability of the interval $[0, l]$ on which $f(x)$ is defined. As Dirichlet established, each sinusoidal function $e^{i\langle x_i|k_j\rangle}$ and its adjoint $e^{-i\langle k_i|x_j\rangle}$ are biorthonormal, yielding a Dirac delta function, which upon integration turns into unity. Thus, the right side becomes $c(k)$, the solution we seek. The $f(x)$ under the integral sign is a denumerably continuous function given by the $n$-dimensional algebraic equations in which the Fourier matrix maps the unknown vector $\{c_j(k_j)\}$ to it.
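The inversion route just described has a discrete analogue: multiplying by the adjoint Fourier matrix makes the off-diagonal products interfere destructively (the role the Dirichlet kernels play in the continuum), so the unknown coefficient vector is recovered exactly. A sketch under the same illustrative equispaced sampling as before:

```python
import numpy as np

n, l = 8, 1.0
x = np.arange(n) * l / n
k = 2 * np.pi * np.arange(n) / l
Phi = np.exp(1j * np.outer(x, k))

# An arbitrary "unknown" coefficient vector and its image under the map.
c = np.exp(-np.arange(n)).astype(complex)
f = (1.0 / l) * (Phi @ c)

# Discrete biorthonormality: Phi* Phi = n I
# (a discrete delta on the diagonal, zero off-diagonal).
assert np.allclose(Phi.conj().T @ Phi, n * np.eye(n))

# Hence multiplying f by the adjoint matrix recovers the coefficients.
c_recovered = (l / n) * (Phi.conj().T @ f)
assert np.allclose(c_recovered, c)
```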
We now see that Fourier proposed, without realizing it, a system of linear algebraic equations using biorthonormal sinusoidal functions, and offered its solution at the same time! The objection which Fourier's contemporary mathematicians raised in 1807 melts away once the method is seen as the mapping from the domain space to its codomain space. While Fourier could not explain to his contemporary mathematicians how his proposed method of expansion worked, Lejeune Dirichlet supplied an answer to that question more than ten years later.
Incidentally, the entries of the Fourier matrix, which is unique in the realm of the mathematical sciences, may be regarded as made of an infinite number of eigensolutions of the Sturm-Liouville homogeneous boundary value problem, or an interior resonance problem. We are awed that the mathematical structure which Fourier discovered in 1807 describes countless natural phenomena as well as man-made information technology in one fell swoop via the single notion of the constructive-destructive interference of pure sinusoidal waves, which even extends deeply into the mysterious microscopic quantum world, where elementary particles are waves at the same time, thereby bringing in the unavoidable notions of Heisenberg's uncertainty principle and Schrödinger's entanglement, Verschränkung.