Contents

1 REAL ANALYSIS
    1.1 CALCULUS
LOGIC
    3.1 BASIC DEFINITIONS: MEAN, VARIANCE
    3.2 BASIC DEFINITIONS: CENTRAL MOMENTS
    3.3 BASIC DEFINITIONS: SKEWNESS
    3.4 BASIC DEFINITIONS: KURTOSIS
    3.5 BASIC DEFINITIONS: JOINT PDF
    3.6 COVARIANCE, CORRELATION
    3.7 GAUSSIAN DISTRIBUTION
    3.8 LOG-NORMAL DISTRIBUTION
    3.9 EXPONENTIAL DISTRIBUTION
    3.10 BOLTZMANN DISTRIBUTION
4 FUNCTIONAL ANALYSIS
    4.1 CONVOLUTION
    4.2 CROSS-CORRELATION
    4.3 AUTO-CORRELATION
    4.4 COVARIANCE, CORRELATION
1 REAL ANALYSIS

1.1 CALCULUS
MULTIPLE INTEGRALS The tricky part is the limits of the multiple integral. A good rule to remember is that the outermost integral runs from point to point, the next from curve to curve, then from surface to surface, etc. In this way, the result is a number. The corresponding curves and surfaces can be determined from the graph or analytically.
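As a quick numerical illustration of the point-to-point / curve-to-curve rule, here is a minimal pure-Python sketch (all helper names are hypothetical) that approximates a double integral with a midpoint Riemann sum: the outer limits are numbers, the inner limits are curves of x. Integrating 1 between the two halves of the unit circle recovers the disk's area, a plain number.

```python
import math

def double_integral(f, a, b, y_lo, y_hi, n=400):
    """Midpoint Riemann sum with curve-valued inner limits.

    Outer limits a, b are numbers (point to point); inner limits
    y_lo(x), y_hi(x) are curves (curve to curve), as the rule above says.
    """
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx               # midpoint of the i-th x strip
        lo, hi = y_lo(x), y_hi(x)
        dy = (hi - lo) / n
        total += sum(f(x, lo + (j + 0.5) * dy) for j in range(n)) * dy * dx
    return total

# Area of the unit disk: integrate 1 between y = -sqrt(1-x^2) and y = +sqrt(1-x^2).
area = double_integral(lambda x, y: 1.0, -1.0, 1.0,
                       lambda x: -math.sqrt(1.0 - x * x),
                       lambda x: math.sqrt(1.0 - x * x))
print(area)   # close to pi
```

The same nesting extends to triple integrals, where the innermost limits become surfaces of (x, y).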
Remark: Complex numbers
$$\left|\frac{1}{z}\right| = \frac{1}{|z|}$$
Dot/cross products between complex vectors are undefined, while algebraic product and division are.
Defn: Frobenius Companion Matrix
The companion matrix is defined for a monic polynomial
$$p(s) = s^n + c_{n-1} s^{n-1} + \dots + c_2 s^2 + c_1 s + c_0$$
as:
$$C = \begin{pmatrix}
-c_{n-1} & -c_{n-2} & -c_{n-3} & \cdots & -c_1 & -c_0 \\
1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & 0 & \cdots & 0 & 0 \\
0 & 0 & 1 & \cdots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & 1 & 0
\end{pmatrix}$$
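A quick sanity check on the top-row convention: for every root $s$ of $p$, the Vandermonde-style vector $(s^{n-1}, \dots, s, 1)^T$ is an eigenvector of the companion matrix with eigenvalue $s$. A minimal pure-Python sketch (helper names are hypothetical):

```python
def companion(coeffs):
    """Top-row companion matrix of s^n + c_{n-1} s^{n-1} + ... + c_0,
    with coeffs = [c_{n-1}, ..., c_1, c_0]."""
    n = len(coeffs)
    rows = [[-c for c in coeffs]]                       # first row: -c_{n-1} ... -c_0
    for i in range(n - 1):                              # subdiagonal of ones
        rows.append([1.0 if j == i else 0.0 for j in range(n)])
    return rows

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# p(s) = s^3 - 6s^2 + 11s - 6 = (s - 1)(s - 2)(s - 3)
C = companion([-6.0, 11.0, -6.0])
for s in (1.0, 2.0, 3.0):
    v = [s ** 2, s, 1.0]                                # Vandermonde-style vector
    assert all(abs(a - s * b) < 1e-9 for a, b in zip(matvec(C, v), v))
```

This is why the eigenvalues of the companion matrix are exactly the roots of the polynomial.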
Divergence theorem:
$$\oint_{\partial V} \vec{f} \cdot d\vec{S} = \int_V \nabla \cdot \vec{f}\; dV$$

Airy functions:
$$\mathrm{Ai}(x) = \frac{1}{\pi} \int_0^{\infty} \cos\!\left(\frac{t^3}{3} + xt\right) dt$$
$$\mathrm{Bi}(x) = \frac{1}{\pi} \int_0^{\infty} \left[\exp\!\left(-\frac{t^3}{3} + xt\right) + \sin\!\left(\frac{t^3}{3} + xt\right)\right] dt$$
$$\int_a^b f(\tau)\, g(\tau)\, d\tau$$
LOGIC
LAW OF EXCLUDED MIDDLE tertium non datur / principium tertii exclusi. Bertrand Russell asserts a distinction between
the "law of excluded middle" and the "law of noncontradiction". In The Problems of Philosophy, he cites three "Laws of
Thought" as more or less "self-evident" or "a priori" in the sense of Aristotle:
1. Law of identity: "Whatever is, is."
2. Law of noncontradiction: "Nothing can both be and not be."
3. Law of excluded middle: "Everything must either be or not be." Note that in semantics, the principle that every declarative sentence is either true or not true is the law of bivalence.
3.1 BASIC DEFINITIONS: MEAN, VARIANCE

The mean is the expected value of the random variable:
$$\mu = \int x\, f_X(x)\, dx \qquad \text{(bivariate: } \iint x\, f(x,y)\, dA \text{)}$$

1. SAMPLE VARIANCE:
$$\mathrm{Var}(x) = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2$$

2. UNBIASED SAMPLE VARIANCE: if the mean of the sample is unknown, the simple variance underestimates the variance by a factor of $\frac{n-1}{n}$; correcting this is the BESSEL CORRECTION:
$$\mathrm{Var}(x) = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2$$

3. BIASED SAMPLE VARIANCE: to minimize the mean squared error (MSE) between the sample variance and the population variance, one can divide by $n+1$ instead. Correcting for bias makes the MSE worse:
$$\mathrm{Var}(x) = \frac{1}{n+1} \sum_{i=1}^{n} (x_i - \bar{x})^2$$
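The effect of the Bessel correction is easy to see by simulation. This sketch (standard library only, helper names hypothetical) draws many small samples from a unit-variance Gaussian and averages the $1/n$ and $1/(n-1)$ estimators; the former clusters near $(n-1)/n$, the latter near the true variance 1:

```python
import random

def sample_var(xs, denom):
    m = sum(xs) / len(xs)                               # sample mean
    return sum((x - m) ** 2 for x in xs) / denom

random.seed(0)
n, trials = 5, 20000
biased = unbiased = 0.0
for _ in range(trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]     # true variance is 1
    biased += sample_var(xs, n) / trials                # divide by n
    unbiased += sample_var(xs, n - 1) / trials          # Bessel: divide by n - 1

print(biased, unbiased)   # roughly 0.8 (= (n-1)/n) and 1.0
```

The underestimation is worst for small n, which is why the correction matters most for short samples.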
3.2 BASIC DEFINITIONS: CENTRAL MOMENTS

$$\mu_n = \int (x - \mu)^n f(x)\, dx$$

The mean must be defined for the central moments to exist (e.g. the Cauchy distribution has no central moments).
The 0th central moment is simply $\mu_0 = 1$.
The 1st central moment is zero: $\mu_1 = 0$.
The 2nd central moment is the variance, describing how spread out the distribution is: $\mu_2 = \sigma^2$. It is indifferent to whether a value lies below or above the mean.
The 3rd central moment is related to how skewed the distribution is, since far values on the positive side produce a positive 3rd moment.
The 4th central moment measures how peaked or heavy-tailed a distribution is.
3.3 BASIC DEFINITIONS: SKEWNESS

$$\gamma_1 = \frac{\mu_3}{\sigma^3}$$
3.4 BASIC DEFINITIONS: KURTOSIS

$$\kappa = \frac{\mu_4}{\sigma^4}$$

It is the normalized 4th central moment; the more peaked the distribution, the higher the kurtosis. For a univariate normal distribution, the kurtosis is 3. EXCESS KURTOSIS is defined as the kurtosis minus 3. Some reference values:
D: Laplace distribution, also known as the double exponential distribution, excess kurtosis = 3
S: hyperbolic secant distribution, excess kurtosis = 2
L: logistic distribution, excess kurtosis = 1.2
N: normal distribution, excess kurtosis = 0
C: raised cosine distribution, excess kurtosis = −0.593762...
W: Wigner semicircle distribution, excess kurtosis = −1
U: uniform distribution, excess kurtosis = −1.2
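Skewness and excess kurtosis follow directly from the central-moment definitions above. A minimal sketch (hypothetical helper names) computes them for a symmetric, flat sample; the skew vanishes and the excess kurtosis is negative, consistent with the uniform entry in the list:

```python
def central_moment(xs, k):
    m = sum(xs) / len(xs)
    return sum((x - m) ** k for x in xs) / len(xs)

def skewness(xs):
    return central_moment(xs, 3) / central_moment(xs, 2) ** 1.5    # mu_3 / sigma^3

def excess_kurtosis(xs):
    return central_moment(xs, 4) / central_moment(xs, 2) ** 2 - 3  # mu_4 / sigma^4 - 3

data = [1.0, 2.0, 3.0, 4.0, 5.0]    # symmetric, flat sample
print(skewness(data))               # 0.0: symmetric data has no skew
print(excess_kurtosis(data))        # about -1.3: flat data is platykurtic
```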
3.5 BASIC DEFINITIONS: JOINT PDF

The expected value functional is defined by the inner product with the joint pdf:
$$E[g(x,y)] = \int g(x,y)\, f_{XY}(x,y)\, dx\, dy$$
3.6 COVARIANCE, CORRELATION

COVARIANCE AND CORRELATION of two jointly distributed random variables are defined by:
$$\mathrm{Cov}(x,y) = E[(X - \mu_x)(Y - \mu_y)] = E(XY) - \mu_x \mu_y$$
$$\mathrm{Cor}(x,y) = \frac{E[(X - \mu_x)(Y - \mu_y)]}{\sigma_x \sigma_y} = \frac{\mathrm{Cov}(x,y)}{\sigma_x \sigma_y}$$
where the expected value is the integration of the inner product with the joint pdf.
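For samples, the expectations become averages. This sketch (hypothetical helper names) computes covariance and correlation for two perfectly correlated sequences, and checks that the two forms of the covariance, $E[(X-\mu_x)(Y-\mu_y)]$ and $E(XY) - \mu_x\mu_y$, agree:

```python
def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])

def cor(xs, ys):
    return cov(xs, ys) / (cov(xs, xs) ** 0.5 * cov(ys, ys) ** 0.5)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]           # ys = 2 * xs, perfectly correlated

# The two covariance forms agree: E[(X-mu_x)(Y-mu_y)] = E(XY) - mu_x mu_y
exy = mean([x * y for x, y in zip(xs, ys)])
assert abs((exy - mean(xs) * mean(ys)) - cov(xs, ys)) < 1e-12

print(cov(xs, ys))   # 2.5
print(cor(xs, ys))   # 1.0 (up to rounding)
```

Correlation normalizes the covariance to the range [−1, 1], so a deterministic positive linear relation gives 1.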
3.7 GAUSSIAN DISTRIBUTION

The Gaussian or normal distribution is remarkably useful due to the CENTRAL LIMIT THEOREM: the distribution of the sample mean of any distribution (with finite variance) converges to normal.
The pdf, cdf, and entropy of the Gaussian or normal distribution $N(\mu, \sigma^2)$ are given by:

pdf: $\dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$

cdf: $\dfrac{1}{2}\left[1 + \operatorname{erf}\!\left(\dfrac{x-\mu}{\sigma\sqrt{2}}\right)\right]$

entropy: $\dfrac{1}{2}\ln\!\left(2\pi e \sigma^2\right)$
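Both the normalization and the entropy formula can be checked numerically with a simple Riemann sum over a wide grid (a sketch; grid width and step are arbitrary choices):

```python
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

mu, sigma = 1.0, 2.0
dx = 0.001 * sigma
grid = [mu - 10 * sigma + i * dx for i in range(20001)]     # mu +- 10 sigma

total = sum(gauss_pdf(x, mu, sigma) for x in grid) * dx     # should be ~1
entropy = -sum(p * math.log(p)                              # -integral of p ln p
               for p in (gauss_pdf(x, mu, sigma) for x in grid)) * dx

print(total)                                                # ~1
print(entropy, 0.5 * math.log(2 * math.pi * math.e * sigma ** 2))   # both ~2.112
```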
3.8 LOG-NORMAL DISTRIBUTION
A log-normal random variable $X$ is one whose natural log is normally distributed: $\ln(X) \sim N(\mu, \sigma^2)$. Likewise, if $Z$ is normally distributed, then $e^Z$ follows a log-normal distribution.

pdf: $\dfrac{1}{x\sigma\sqrt{2\pi}}\, e^{-\frac{(\ln x-\mu)^2}{2\sigma^2}}, \quad x > 0$

cdf: $\dfrac{1}{2}\left[1 + \operatorname{erf}\!\left(\dfrac{\ln x-\mu}{\sigma\sqrt{2}}\right)\right]$
mean: $e^{\mu + \sigma^2/2}$

variance: $\left(e^{\sigma^2} - 1\right) e^{2\mu + \sigma^2}$

entropy: $\dfrac{1}{2} + \dfrac{1}{2}\ln\!\left(2\pi\sigma^2\right) + \mu$

3.9 EXPONENTIAL DISTRIBUTION

3.10 BOLTZMANN DISTRIBUTION
4 FUNCTIONAL ANALYSIS

4.1 CONVOLUTION

Convolution is an operator taking in two functions, the target function $f$ and the reversed, shifted weight function $g$, and returning the average as a function of the shift $t$:
$$(f * g)(t) = \int f(\tau)\, g(t - \tau)\, d\tau$$
The convolution can be thought of as the profile of averages of $f$ created by shifting the focus of the weight function $g$. The reversal is to make the shifting proceed in the positive direction.
Note that convolution commutes: $f * g = g * f$. Proof: a change of variables shows it directly.
Eg: the convolution with the Dirac delta function
$$f(x) * \delta(x - x_i) = \int_{\mathbb{R}^3} f(\tau)\, \delta[(x - x_i) - \tau]\, d\tau = f(x - x_i)$$
is simply the shifting of the function to another point of reference.
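The discrete analogue makes both properties concrete. A minimal sketch (hypothetical helper name): convolving with a delayed unit impulse shifts the sequence, and swapping the arguments gives the same result, mirroring the commutativity noted above:

```python
def convolve(f, g):
    """Full discrete convolution: (f * g)[n] = sum_m f[m] g[n - m]."""
    out = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj      # each pair contributes at lag i + j
    return out

f = [1.0, 2.0, 3.0]
delta = [0.0, 0.0, 1.0]                # unit impulse delayed by two samples
print(convolve(f, delta))              # [0.0, 0.0, 1.0, 2.0, 3.0]: f shifted by two

assert convolve(f, [1.0, 1.0]) == convolve([1.0, 1.0], f)   # commutativity
```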
4.2 CROSS-CORRELATION

Cross-correlation, or the sliding inner product, between two functions is the inner product as a function of the shift:
$$R_{fg}(t) = (f \star g)(t) = \int \overline{f(\tau)}\, g(t + \tau)\, d\tau$$
where $\overline{f}$ is the complex conjugate of $f$. For real functions, this is inconsequential. The cross-correlation is the average of the target function as a function of the shift of the weight. The cross-correlation between two discrete functions is analogous:
$$R_{fg}[n] = (f \star g)[n] = \sum_{m=-\infty}^{\infty} \overline{f[m]}\, g[n + m]$$
4.3 AUTO-CORRELATION

Auto-correlation is simply the cross-correlation of the target function with itself; the weight is the function itself:
$$R_{ff}(t) = (f \star f)(t)$$
An autocorrelation always has a peak at a lag of zero, unless the signal is the trivial zero signal.
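The zero-lag peak is easy to verify on a short discrete signal. This sketch (hypothetical helper name) implements the discrete cross-correlation for real signals and applies it to a signal against itself; the zero-lag value is the sum of squares, which dominates every other lag:

```python
def xcorr(f, g, max_lag):
    """Discrete cross-correlation R_fg[n] = sum_m f[m] g[n + m] (real signals)."""
    return {n: sum(f[m] * g[n + m]
                   for m in range(len(f)) if 0 <= n + m < len(g))
            for n in range(-max_lag, max_lag + 1)}

sig = [1.0, -2.0, 3.0, -1.0]
r = xcorr(sig, sig, 3)                 # autocorrelation of sig with itself
print(r[0])                            # 15.0, the sum of squares
assert all(r[n] < r[0] for n in r if n != 0)   # peak at zero lag
```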
The expected value functional is defined by the inner product with the joint pdf:
$$E[g(x,y)] = \int g(x,y)\, f_{XY}(x,y)\, dx\, dy$$

4.4 COVARIANCE, CORRELATION

COVARIANCE AND CORRELATION of two jointly distributed random variables are defined by:
$$\mathrm{Cov}(x,y) = E[(X - \mu_x)(Y - \mu_y)] = E(XY) - \mu_x \mu_y$$
$$\mathrm{Cor}(x,y) = \frac{E[(X - \mu_x)(Y - \mu_y)]}{\sigma_x \sigma_y} = \frac{\mathrm{Cov}(x,y)}{\sigma_x \sigma_y}$$
where the expected value is the integration of the inner product with the joint pdf.
$$\frac{\partial \vec{r}}{\partial x_i} = h_i\, \hat{e}_i$$
where $\vec{r}$ is the position vector, $x_i$ is the $i$th coordinate, and $h_i$ is the arclength scaling factor.

Table 1

        Cartesian (x, y, z)   Cylindrical (r, θ, z)   Spherical (r, φ, θ)
h1      1                     1                       1
h2      1                     r                       r sin θ
h3      1                     1                       r
SPATIAL DERIVATIVES IN DIFFERENT COORDINATES The catch here is that the basis vectors now change with the coordinates as well! For Cartesian coordinates, the basis vectors don't change:

Table 2: Cartesian derivatives of basis vectors

        e_x   e_y   e_z
∂/∂x    0     0     0
∂/∂y    0     0     0
∂/∂z    0     0     0

CYLINDRICAL

Table 3: Cylindrical derivatives of basis vectors

        e_r    e_θ    e_z
∂/∂r    0      0      0
∂/∂θ    e_θ    −e_r   0
∂/∂z    0      0      0

Key observation: when determining ∂e_r/∂θ, observe that perturbing θ by δθ rotates the unit vector e_r by δθ; the difference δe_r must have magnitude δθ · 1 and point in the direction of the tangent to the perturbation curve, i.e. e_θ. Hence we have the above relation.
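The cylindrical relation ∂e_r/∂θ = e_θ can be verified numerically with a central finite difference (a sketch; the function names and the test angle are arbitrary):

```python
import math

def e_r(theta):                     # cylindrical radial unit vector in the xy-plane
    return (math.cos(theta), math.sin(theta))

def e_theta(theta):                 # cylindrical tangential unit vector
    return (-math.sin(theta), math.cos(theta))

theta, h = 0.7, 1e-6
d_e_r = [(a - b) / (2 * h)          # central difference in theta
         for a, b in zip(e_r(theta + h), e_r(theta - h))]
print(d_e_r, e_theta(theta))        # the two agree componentwise
```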
SPHERICAL The scenario is similar for spherical coordinates; note that we denote the AZIMUTHAL ANGLE as φ and the POLE ANGLE with the z axis as θ:

Table 4: Spherical derivatives of basis vectors

        e_r         e_φ                       e_θ
∂/∂r    0           0                         0
∂/∂φ    sin θ e_φ   −sin θ e_r − cos θ e_θ    cos θ e_φ
∂/∂θ    e_θ         0                         −e_r

The modifications unique to the spherical coordinates are that when we perturb φ, we are not rotating e_r and e_θ themselves but their radial components on the cut plane (constant z); this component has magnitude sin θ for e_r and magnitude cos θ for e_θ along that radial. Hence when we perturb φ, we are rotating these less-than-unit vectors: the difference δe_r has magnitude sin θ δφ in the direction of e_φ, and the difference δe_θ has magnitude cos θ δφ in the direction of e_φ. When perturbing θ, however, things are again similar to perturbing the polar angle in the cylindrical case.
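The spherical entry ∂e_r/∂φ = sin θ e_φ admits the same finite-difference check (a sketch; θ is the pole angle, φ the azimuthal angle, and the test values are arbitrary):

```python
import math

def e_r(theta, phi):                # spherical radial unit vector (theta = pole angle)
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def e_phi(phi):                     # azimuthal unit vector
    return (-math.sin(phi), math.cos(phi), 0.0)

theta, phi, h = 0.6, 1.1, 1e-6
d_e_r = [(a - b) / (2 * h)          # central difference in phi
         for a, b in zip(e_r(theta, phi + h), e_r(theta, phi - h))]
expected = [math.sin(theta) * c for c in e_phi(phi)]    # sin(theta) e_phi
print(d_e_r, expected)              # the two agree componentwise
```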