Andrea Giusti
DIFA - Università di Bologna & Collegio Superiore
Fall 2014
Course Description
Objectives
The aim of the course is to provide a general understanding of the methods of solution
for the most important PDEs that arise in Mathematical Physics.
At the end of the course, the students should be able to:
• use a few standard methods (separation of variables, Green’s functions, ...) to solve some
elementary exercises.
Outline of Course
• Physical Properties;
• Well-posedness & Boundary conditions (Dirichlet, Neumann and Mixed BC);
• Separation of variables & Basics of Fourier Analysis;
• Uniqueness & the Energy Method;
• The Fundamental solution.
• Physical Properties;
• Well-posedness & Uniqueness;
• d’Alembert’s formula;
• The General Solution (1 + 1 dimensions);
• Causality;
• Uniqueness & the Energy Method.
Homework Assignments & Form of Assessment
Homework is perhaps the most important component of this course: it provides you with regular
feedback on whether or not you are keeping up with the material, and it challenges you to creatively
apply what you have already learned. There will be an assignment roughly every two weeks.
Homework assignments will typically be posted on the course website.
For your own benefit, I encourage you to solve all the proposed exercises because they will be the
main topic of the exam, together with a subject of your choice within the topics discussed during
the course.
• A. C. King, J. Billingham and S.R. Otto - Differential Equations: Linear, Nonlinear, Ordi-
nary, Partial. Cambridge University Press, 2003;
Contents

1 Introduction
1.1 What is a PDE?
1.2 The Mathematical Problem
1.3 PDEs and Physics
1.4 Linear Partial Differential Equations
1.5 How to solve PDEs
1.6 First Order PDEs
1.6.1 Characteristic Curves
1.6.2 Quasilinear Equations
1.6.3 Conservation Laws
1.7 Appendix: General Method of Characteristics
2 Transport Equation
2.1 Initial-value Problem
2.2 Non-homogeneous Problem
5 Wave Equation
5.1 Initial Value Problem
5.1.1 Causality
5.2 Energy Method
Chapter 1
Introduction
Notations
We will be studying functions u : Rn −→ R, such that u = u(x1 , . . . , xn ) = u(x) (all of which is
easily generalized to the case of vector fields u : Rn −→ Rm ), and their partial derivatives. Here
x1 , . . . , xn are the standard Cartesian coordinates on Rn . We will also use the alternate notation
u(x), u(x, y), u(x, y, z), etc. and u(r, θ, φ) for spherical coordinates on R3 .
Sometimes, we will also consider a time coordinate t, in which case (t, x1 , . . . , xn ) denotes the
standard Cartesian coordinates on R1+n .
We will also use several different notations for partial derivatives:
∂u/∂xk ≡ ∂xk u ≡ ∂k u ≡ uxk ,  k ∈ {1, . . . , n}
and similarly for derivatives of higher order.
Remark. The previous definition can also be restated in a more compact formalism, as follows.
For a given domain Ω ⊂ Rn , a function u : Ω → R and a real function F ∈ C 1 , the equation
F (DN u, DN−1 u, . . . , Du, u, x) = 0
defines a PDE of order N , where, for a multi-index α = (α1 , . . . , αn ),
Dα := ∂|α| / (∂x1^α1 · · · ∂xn^αn) ,  |α| := α1 + · · · + αn
Example 1. Consider the PDE:
−∂t2 u + (1 + cos u) ∂x3 u = 0 u = u(t, x)
This is a clear example of a third-order nonlinear PDE.
Example 2. Consider the PDE:
−∂t² u + ∂x² u + m u = 0 ,  u = u(t, x) , m ∈ R₀⁺
This is an example of a second-order linear PDE.
More generally, we can say that a semilinear PDE has the following form:
Σ|α|=N aα (x) Dα u + a0 (DN−1 u, . . . , Du, u, x) = 0    (1.3)
More generally, we can say that a quasilinear PDE has the following form:
Σ|α|=N aα (DN−1 u, . . . , Du, u, x) Dα u + a0 (DN−1 u, . . . , Du, u, x) = 0    (1.4)
1.2 The Mathematical Problem
Suppose that we are interested in some physical system. A very fundamental question is:
“Which PDEs are good models for the system?”.
A major goal of modeling is to answer this question. Obviously, there is no general recipe
for answering this question. In practice, good models are often the end result of confrontations
between experimental data and theory.
The aim of this course is to discuss some important physical systems and the PDEs that are
commonly used to model them.
Now let’s assume that we have a PDE that we believe is a good model for our system of interest.
Then, the primary goals of PDE theory are to answer the following questions:
1. Does the PDE have any solutions? (aka Existence Problem)
2. Are the solutions corresponding to the given data unique? (aka Uniqueness Problem)
3. What happens if we slightly vary the data? Does the solution then also vary only slightly?
1.3 PDEs and Physics
Some famous examples of PDEs arising in Mathematical Physics follow.
The Navier–Stokes equations of fluid dynamics (incompressible flow):
∂t u + (u · ∇)u + (1/ρ)∇p = ν ∆u
∇·u=0    (1.5)
where u = u(t, x) is the velocity field of the fluid, ρ is the density of the fluid, p is the pressure, ν
is the viscosity and ∆ := ∇ · ∇ is the Laplacian differential operator.
The Maxwell Equations of Electromagnetism (in vacuum):
∇·E=0 ,  ∇·B=0
∇ × E + ∂t B = 0 ,  ∇ × B − (1/c²) ∂t E = 0
with E = E(t, x) ∈ R3 and B = B(t, x) ∈ R3 .
The Boltzmann equation of kinetic theory:
∂t f + v · ∇x f = ∫∫ B(v − v∗ , σ) (f ′ f∗′ − f f∗ ) dσ dv∗
where f = f (t, x, v) ≥ 0 is integrable with total unit mass, B is the collision kernel,
f ′ = f (t, x, v′ ), f∗ = f (t, x, v∗ ), f∗′ = f (t, x, v∗′ ), and the post-collisional velocities are
v′ = (v + v∗)/2 + σ |v − v∗|/2 ,  v∗′ = (v + v∗)/2 − σ |v − v∗|/2
The Schrödinger equation of Quantum Mechanics:
iℏ ∂t ψ = −(ℏ²/2m) ∆ψ + V ψ
where ψ = ψ(t, x) ∈ C and V = V (t, x) is a potential.
1.4 Linear Partial Differential Equations
Before we dive into a specific model, let’s discuss a distinguished class of PDEs that are
relatively easy to study. The PDEs of interest are called linear PDEs. Most of this course will
concern linear PDEs.
Remark. The notation was introduced out of convenience and laziness. The definition is closely
connected to the superposition principle.
Definition 3 (Linear PDEs). A Partial Differential Equation is linear if it can be written as:
Lu = f (x)    (1.7)
where L := Σ|α|≤N aα (x) Dα is a linear differential operator, i.e. L(au + bv) = aLu + bLv for all
a, b ∈ R.
Proposition 2. Let Sh be the set of all solutions to the homogeneous linear PDE, i.e.
Sh := {u : Ω ⊂ Rn → R | Lu = 0}    (1.9)
and let u∗ be a particular solution of the inhomogeneous equation
Lu = f (x)    (1.10)
Then the set S of all solutions to the inhomogeneous equation is the translation of Sh by u∗ , i.e.:
S := {uh + u∗ | uh ∈ Sh }    (1.11)
Proof. Assume that Lu∗ = f and let w be such that Lw = f ; then L(w − u∗ ) = f − f = 0,
so that w − u∗ ∈ Sh . Thus, w = u∗ + (w − u∗ ) and so w ∈ S by definition.
On the other hand, if w ∈ S, then w = u∗ + uh for some uh ∈ Sh . Therefore, Lw = L(u∗ + uh ) =
f + 0 = f . Thus, w is a solution to the inhomogeneous equation (1.10).
1.5 How to solve PDEs
Before we start studying the most important PDEs, I would like to make the following
(fundamental) remarks concerning the solution of these kinds of problems. Firstly, there is no general
recipe that works for all PDEs! We will develop some tools that will enable us to analyse
some important classes of PDEs.
Secondly, we usually do not have explicit formulas for the solutions to the
PDEs we are interested in! Instead, we are forced to understand and estimate the solutions
without having explicit formulas.
The two things that you typically need to study a PDE are: the PDE (obviously) and some
“data”.
1.6 First Order PDEs
We start with the general linear first order PDE in two variables,
a(x, y) ux + b(x, y) uy + c(x, y) u = f (x, y)    (1.12)
where a, b, c, f ∈ C(Ω) in some region Ω ⊂ R², and we assume that a and b do not vanish
simultaneously at any (x, y). We could equally well consider a semilinear equation
instead of a linear equation, as the theory of the former does not require any special treatment as
compared to that of the latter.
Now, the key to the solution of equation (1.12) is to find a change of variables
ξ = ξ(x, y) ,  η = η(x, y)
with w(ξ, η) = u(x, y), for which the chain rule gives
ux = wξ ξx + wη ηx ,  uy = wξ ξy + wη ηy
If we now substitute these into equation (1.12), we get
(a ξx + b ξy ) wξ + (a ηx + b ηy ) wη + c w = f    (1.17)
This is close to the form of equation (1.12); if we can choose η = η(x, y) such that
a ηx + b ηy = 0    (1.18)
i.e.
ηx /ηy = −b/a    (1.19)
Supposing that we can define a new variable η which fulfils this constraint, what is the equation
describing the curves of constant η?
Setting η = const. = k, then
dη = ηx dx + ηy dy = 0  =⇒  dy/dx = −ηx /ηy = b/a    (1.20)
So, the equation η(x, y) = k defines the solutions of the following Ordinary Differential Equation
(ODE):
dy/dx = b(x, y)/a(x, y)    (1.21)
This equation is called the Characteristic Equation of the linear equation (1.12). Its solution
can be written in the form g(x, y, η) = 0 (where η is the constant of integration) and defines a
family of curves in the plane called characteristics or characteristic curves of (1.12).
Characteristics represent curves along which the independent variable η of the new coordinate
system (ξ, η) is constant.
Example. Consider the equation
x² ux + y uy + xy u = 1    (1.22)
Its characteristic equation is
dy/dx = b(x, y)/a(x, y) = y/x²    (1.23)
which is separable, with solution
ln y = −1/x + k    (1.24)
This is an integral of the characteristic equation describing curves of constant η and so we choose
η(x, y) = ln y + 1/x    (1.25)
Now, if we choose, for example, ξ(x, y) = x, we have the Jacobian
J = ξx ηy − ξy ηx = ηy = 1/y ≠ 0
as required.
Since ξ = x,
η(x, y) = ln y + 1/x = ln y + 1/ξ  =⇒  y = exp(η − 1/ξ)    (1.26)
Now we apply the transformation
ξ = x ,  η = ln y + 1/x
with w(ξ, η) = u(x, y) and we have
ux = wξ ξx + wη ηx = wξ − x⁻² wη = wξ − wη /ξ²
uy = wξ ξy + wη ηy = 0 + (1/y) wη = exp(1/ξ − η) wη    (1.27)
Then the PDE becomes (prove it!)
wξ + [exp(η − 1/ξ)/ξ] w = 1/ξ²    (1.28)
which concludes our example.
This is a linear first order equation of the general form
wξ + h(ξ, η) w = F (ξ, η)    (1.29)
Multiplying both sides by the integrating factor e^H , with H(ξ, η) := ∫ h(ξ, η) dξ, we have
e^H wξ + h(ξ, η) e^H w = F (ξ, η) e^H    (1.30)
thus
∂/∂ξ (e^H w) = F (ξ, η) e^H    (1.31)
Now we integrate with respect to ξ; since η is being carried as a parameter, the constant of
integration may depend on η:
e^H w = g(η) + ∫ F (ξ, η) e^H dξ    (1.32)
in which g is an arbitrary differentiable function of one variable.
Now, the general solution of the transformed equation is
w(ξ, η) = e^{−H(ξ,η)} g(η) + e^{−H(ξ,η)} ∫^ξ F (ξ′, η) e^{H(ξ′,η)} dξ′    (1.33)
We then obtain the general solution of the original equation by substituting back ξ(x, y) and η(x, y).
1.6.1 Characteristic Curves
A certain class of first order PDEs (linear and semilinear PDEs) can be reduced to a set of
ODEs. This makes use of the general philosophy that ODEs are easier to solve than PDEs.
The solutions of the characteristic equation
dy/dx = b(x, y)/a(x, y)    (1.35)
represent a one-parameter family of curves whose tangent at each point is in the direction of the
vector n = (a, b). Note that the left-hand side of the semilinear equation
a(x, y) ux + b(x, y) uy = κ(x, y, u)    (1.36)
is the derivative of u along these curves when they are written in the parametric form
dx/ds = a(x, y) ,  dy/ds = b(x, y)    (1.38)
Then we have
du/ds = ux dx/ds + uy dy/ds = a(x, y) ux + b(x, y) uy = κ(x, y, u)    (1.39)
which shows the variation of u along the curves. The one parameter family of characteristic curves
is parametrised by η, i.e. each value of η represents one unique characteristic.
The solution of equation (1.36) then reduces to the solution of the family of ODEs
du/ds = κ(x, y, u)    (1.40)
along each characteristic.
The parametric characteristic equations (1.38) have to be solved together with equation (1.40),
called the compatibility equation, to find a solution to semilinear equation (1.36).
The Cauchy Problem
Consider a curve Γ in R2 such that
Γ(σ) = (x0 (σ), y0 (σ)) (1.41)
The Cauchy problem consists in finding a solution of the equation
F (ux , uy , u, x, y) = 0 (1.42)
in a neighbourhood of Γ∗ (graph of Γ) such that
u = u0 (σ) on Γ∗ (1.43)
called Cauchy Data on Γ.
Remark.
1. u can only be found in the region between the characteristics drawn through the endpoints
of Γ;
2. Characteristics are curves on which the values of u combined with the equation are not
sufficient to determine the normal derivative of u;
3. A discontinuity in the initial data propagates onto the solution along the characteristics.
These are curves across which the derivatives of u can jump while u itself remains continuous.
1.6.2 Quasilinear Equations
We consider first a special class of nonlinear equations where the nonlinearity is confined to
the unknown function u. The derivatives of u appear in the equation linearly. Such equations are
called quasilinear. More generally, we can say that a Quasilinear PDE has the following form:
Σ|α|=N aα (DN−1 u, . . . , Du, u, x) Dα u + a0 (DN−1 u, . . . , Du, u, x) = 0    (1.47)
To remain consistent with the previous sections, we are going to discuss in detail only the case of
quasilinear PDEs in two dimensions. Indeed, let us consider the most general quasilinear equation
of the first order:
a(x, y, u) ux + b(x, y, u) uy = c(x, y, u)    (1.48)
where a, b, c ∈ C¹(D), D ⊂ R³, and a² + b² ≠ 0.
One can readily verify that the method of characteristics developed in the previous sections
applies to the quasilinear case (1.48) as well.
Namely, each point of the initial curve Γ(σ) = (x0 (σ), y0 (σ), u0 (σ)) serves as an initial condition
for the characteristic equations
dx/ds = a(x, y, u) ,  dy/ds = b(x, y, u) ,  du/ds = c(x, y, u)    (1.50)
The problem consisting of (1.48) and these initial conditions is called the Cauchy Problem for
quasilinear equations.
The main difference between the characteristic equations for the linear/semilinear case, i.e.
dx/ds = a(x, y) ,  dy/ds = b(x, y) ,  du/ds = κ(x, y, u)
and the set (1.50) is that in the former case the first two equations are independent of the third
equation and of the initial conditions. In the quasilinear case, this uncoupling of the characteristic
equations is no longer possible, since the coefficients a and b depend upon u. We can also point
out that in the semilinear case, the equation for u is always linear, and thus it is guaranteed to
have a global solution (provided that the solutions x(s) and y(s) exist globally).
Example 6. Solve the Cauchy Problem
ux + uy = 2
u(x, 0) = x²    (1.52)
The characteristic equations and the parametric initial conditions are then given by
ẋ = 1 , ẏ = 1 , u̇ = 2
(1.53)
x(0, σ) = σ , y(0, σ) = 0 , u(0, σ) = σ 2
where we have made use of the convention ġ ≡ dg/ds.
Now, it is simple to integrate the previous ODEs:
x(s, σ) = s + f1 (σ) , y(s, σ) = s + f2 (σ) , u(s, σ) = 2s + f3 (σ) (1.54)
Upon substituting into the initial conditions, we find
x = s+σ, y = s, u = 2s + σ 2 (1.55)
We have thus obtained a parametric representation of the so called integral surface.
To find an explicit representation of the surface u as a function of x and y we need to invert the
relations x = x(s, σ) and y = y(s, σ) simultaneously (i.e. the Jacobian J must be non-vanishing
in the region in which we want to extend our solution).
In the current example the inversion is straightforward:
s = y, σ =x−y (1.56)
Thus the explicit representation of the integral surface (the solution) is given by
u(x, y) = 2y + (x − y)2 (1.57)
This concludes the example.
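As a quick sanity check, the characteristic construction of Example 6 can be reproduced numerically. The sketch below (an illustrative script of mine, not part of the original notes) integrates the characteristic ODEs with an explicit Euler step and compares the result against the closed-form solution u(x, y) = 2y + (x − y)²:

```python
# Illustrative check of Example 6: integrate the characteristic ODEs
#   dx/ds = 1, dy/ds = 1, du/ds = 2,  (x, y, u)(0) = (sigma, 0, sigma**2)
# with explicit Euler and compare with u(x, y) = 2*y + (x - y)**2.

def solve_along_characteristic(sigma, s_end, n_steps=1000):
    """Euler integration along one characteristic, starting on the initial curve."""
    ds = s_end / n_steps
    x, y, u = sigma, 0.0, sigma ** 2
    for _ in range(n_steps):
        x += ds          # dx/ds = 1
        y += ds          # dy/ds = 1
        u += 2.0 * ds    # du/ds = 2
    return x, y, u

def u_exact(x, y):
    return 2.0 * y + (x - y) ** 2

if __name__ == "__main__":
    for sigma in (-1.0, 0.5, 2.0):
        x, y, u = solve_along_characteristic(sigma, s_end=1.5)
        print(sigma, u, u_exact(x, y))
```

Since the right-hand sides of the ODEs are constant here, even the simple Euler step reproduces the exact solution up to rounding.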
Example 7. Solve the Cauchy Problem
ux + uy + u = 1
u|Ω = sin x ,  Ω = {(x, y) ∈ R² | x > 0 , y = x + x²}    (1.58)
The characteristic equations and the associated initial conditions are given by
ẋ = 1 , ẏ = 1 , u̇ = 1 − u
(1.59)
x(0, σ) = σ , y(0, σ) = σ + σ 2 , u(0, σ) = sin σ
respectively.
Now, if we compute the Jacobian
J = det( xs xσ ; ys yσ ) = 2σ    (1.60)
we anticipate a unique solution at each point where σ ≠ 0. Since we are limited to the
regime x > 0, i.e. σ > 0, we indeed expect a unique solution.
The parametric integral surface is then given by
x(s, σ) = σ + s
y(s, σ) = σ + σ 2 + s (1.61)
u(s, σ) = 1 − (1 − sin σ)e−s
16
In order to invert the mapping, we substitute the equation for x into the equation for y to obtain
σ = √(y − x)
in particular, the sign of the square root was selected according to the condition x > 0.
Now it is easy to find
s = x − √(y − x)
whence the explicit representation of the solution is given by
u(x, y) = 1 − (1 − sin √(y − x)) exp(√(y − x) − x)    (1.62)
which is defined in the region D = {(x, y) : x > 0 , y − x ≥ 0}.
1.6.3 Conservation Laws
A (scalar) conservation law in one space dimension expresses the balance
d/dt ∫_{x1}^{x2} u(x, t) dx = −[ f (u(x2 , t), x2 , t) − f (u(x1 , t), x1 , t) ]    (1.63)
where the RHS (right-hand side) represents the flux of the quantity u through the boundary of
the domain.
If we consider the classical example of Traffic Flow, given a street starting at point x1 and
ending at point x2 , the LHS (left-hand side) would represent the time variation of the total
number of cars between points x1 and x2 .
Assuming u ∈ C¹(D; t ≥ 0) we see that
∫_{x1}^{x2} ut (ξ, t) dξ = −∆x f (u, x, t) := −[ f (u(x2 , t), x2 , t) − f (u(x1 , t), x1 , t) ]    (1.64)
and, therefore,
(1/(x2 − x1)) ∫_{x1}^{x2} ut (ξ, t) dξ = −[ f (u(x2 , t), x2 , t) − f (u(x1 , t), x1 , t) ] / (x2 − x1)    (1.65)
Taking the limit x2 → x1 , we obtain the differential form of the conservation law, i.e., expanding
the total x-derivative of the flux,
ut + fu ux + fx = 0    (1.67)
If we then assume that f (u, x, t) = f (u), we reach the well known one-dimensional nonlinear
wave equation
ut + c(u) ux = 0    (1.68)
with c(u) ≡ f ′(u).
The characteristic equations and the associated initial conditions are given by
ṫ = 1 , ẋ = c(u) , u̇ = 0
(1.70)
x(0, σ) = σ , t(0, σ) = 0 , u(0, σ) = φ(σ)
respectively.
Then, integrating the former ODEs and taking into account the initial conditions, one finds
t(σ, s) = s ,  x(σ, s) = s c(u) + σ ,  u(σ, s) = φ(σ)    (1.71)
and hence, defining F (σ) := c(φ(σ)),
t(σ, s) = s ,  x(σ, s) = s F (σ) + σ    (1.72)
Now, if we compute the Jacobian
J = det( xs xσ ; ts tσ ) = −[ s F ′(σ) + 1 ] ≡ −[ t F ′(σ) + 1 ]    (1.73)
Thus J ≠ 0 in a neighbourhood of s = 0, so we can actually find a unique regular solution of the
Cauchy Problem in a neighbourhood of the initial curve (Γ∗ = {(x, t) : x ∈ R , t = 0}).
On the other hand, there may exist some values of s (and thus also of t) such that J = 0, which
means that the solution becomes singular. These values, if they exist, are the roots of the equation
J = 0, and the earliest of them defines the critical time
tc := 1 / max |F ′(σ)|  (where the max is taken over σ with F ′(σ) < 0)    (1.76)
From the characteristic equations we can also deduce that
t(σ, s) = s
x(σ, s) = t c(u) + σ (1.77)
u(σ, s) = φ(σ)
We can also recover an implicit expression for the integral surface; indeed,
u = φ(x − c(u) t)    (1.78)
Now, if we calculate the derivatives with respect to x and t we get the following expressions (Prove
it!):
ut = φ′(σ) σt = −F (σ) φ′(σ) / (1 + t F ′(σ))
ux = φ′(σ) σx = φ′(σ) / (1 + t F ′(σ))    (1.79)
Remark. It is worth noticing that if we multiply both sides of the former Cauchy Problem by
the term c′(u) (supposing c′(u) ≠ 0) and set v := c(u), g := c ∘ φ, we get a new problem, namely
vt + v vx = 0 ,  Ω = {(x, t) | x ∈ R , t ≥ 0}
v(x, 0) = g(x)    (1.80)
Consider now the problem with initial datum u(x, 0) = A sin x, A > 0. The characteristic
equations and the parametric initial conditions are then given by
ṫ = 1 ,  ẋ = u ,  u̇ = 0
x(0, σ) = σ ,  t(0, σ) = 0 ,  u(0, σ) = A sin σ    (1.83)
In order to calculate the critical time we should notice that
ux = A cos σ / (1 + t A cos σ) ,  ut = −A² sin σ cos σ / (1 + t A cos σ)    (1.87)
and then
tc = 1 / max |A cos σ|  (cos σ < 0)  =⇒  tc = 1/A    (1.88)
indeed the cosine attains its most negative value, for σ ∈ [0, 2π], at σ = π.
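The breaking-time formula tc = 1/A can be checked numerically by scanning the Jacobian factor 1 + t A cos σ over a grid of σ. In the sketch below (illustrative, with A chosen arbitrarily) we search for the earliest time at which some characteristic makes the Jacobian vanish:

```python
import math

# Illustrative check of t_c = 1/A for u_t + u u_x = 0, u(x, 0) = A*sin(x):
# the Jacobian J = 1 + t*A*cos(sigma) first vanishes at
#   t_c = 1 / max |A cos(sigma)|, the max taken over sigma with cos(sigma) < 0.

def critical_time(A, n=100000):
    """Scan sigma in [0, 2*pi) for the earliest breaking time."""
    best = math.inf
    for i in range(n):
        sigma = 2.0 * math.pi * i / n
        slope = A * math.cos(sigma)
        if slope < 0:                       # only where the data is decreasing
            best = min(best, -1.0 / slope)  # solve J = 1 + t*slope = 0 for t
    return best

if __name__ == "__main__":
    A = 2.0
    print(critical_time(A), 1.0 / A)  # both close to 0.5
```

The minimum is attained at σ = π, in agreement with (1.88).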
1.7 Appendix: General Method of Characteristics
We finish by describing the general method for solving a first-order equation in n variables.
Consider the first-order, nonlinear equation
F (x, u, ∇u) = 0 ,  x ∈ Rⁿ ,  ∇ = Σ_{i=1}^{n} ei ∂xi    (1.89)
In the case of two spatial variables, we prescribed initial data on a curve Γ in R2 . Now we
must prescribe data on an (n − 1)-dimensional manifold Γ in Rn .
Remark. An m-dimensional manifold is a surface which can be represented locally as the graph
of a function.
Our Cauchy problem is
F (x, u, ∇u) = 0 ,  x ∈ Rⁿ
u|Γ = φ    (1.90)
First, we parametrize Γ by the vector σ = (σ1 , . . . , σn−1 ) ∈ U ⊂ Rn−1 so that Γ(σ) = (x01 (σ), . . . , x0n (σ)).
By letting z(s) = u(x(s)) and pi (s) = uxi (x(s)), we rewrite our equation as
F (x, z, p) = 0    (1.91)
and obtain the characteristic equations
ẋi = ∂F/∂pi ,  ż = Σ_{i=1}^{n} pi ∂F/∂pi ,  ṗi = −∂F/∂xi − pi ∂F/∂z    (1.92)
for i = 1, . . . , n.
Our initial conditions are given by
xi (σ, 0) = x0i (σ) ,  z(σ, 0) = φ(σ) ,  pi (σ, 0) = ψi (σ)    (1.93)
where the functions ψi (σ), i = 1, . . . , n are determined by solving the following equations.
First, we need,
F [x01 (σ), . . . , x0n (σ), φ(σ), ψ1 (σ), . . . , ψn (σ)] = 0
Second, we need
∂u/∂σi (σ, 0) = Σ_{k=1}^{n} (∂u/∂xk )(∂xk /∂σi )
for i = 1, . . . , n − 1.
But, u(σ, 0) = φ(σ), xi (σ, 0) = x0i (σ) and uxi = pi . Therefore, this equation becomes
∂φ/∂σi = Σ_{k=1}^{n} ψk ∂x0k /∂σi
Therefore, our system of n equations for the n unknown functions ψi (σ) is given by
φσi = Σ_{k=1}^{n} ψk ∂x0k /∂σi ,  i = 1, . . . , n − 1
F [x01 (σ), . . . , x0n (σ), φ(σ), ψ1 (σ), . . . , ψn (σ)] = 0    (1.94)
Again, the functions ψi may not exist or may not be unique, but if they do exist, we can find a unique
solution of (1.92) satisfying the initial conditions (1.93) for that choice of ψi .
In order to guarantee that we can invert the function x = x(σ, s) near the manifold Γ we will
assume our initial data is noncharacteristic. That is, defining Ψ(σ) = (ψ1 (σ), . . . , ψn (σ)), we say
that {Γ(σ), φ(σ), Ψ(σ)} is noncharacteristic if
N̂ · ∇p F ≠ 0    (1.95)
where N̂ denotes the normal to the manifold Γ.
In summary, for noncharacteristic boundary data (Γ, φ, Ψ), we can find a local solution of (1.90) by
solving the characteristic equations (1.92) with initial conditions (1.93) and letting u(x) = z(σ, s).
More precisely, ∀σ ∈ U ⊂ Rn−1 , let (x(σ, s), z(σ, s), p(σ, s)) be the unique solution of (1.92), (1.93).
By the noncharacteristic assumption on the initial data, we can invert the function x = x(σ, s)
near s = 0. That is, we can find functions f , g such that σ = f(x) and s = g(x). Now, letting
u(x) := z(f(x), g(x))
for x near Γ∗ , we obtain the desired local solution.
Chapter 2
Transport Equation
One of the simplest PDEs is the transport equation with constant coefficients, i.e.
ut + b · ∇u = 0 ,  u = u(t, x) ,  (t, x) ∈ Ω = ]0, +∞[ × Rⁿ ,  b ∈ Rⁿ    (2.1)
Now, which functions u solve (2.1)? To answer this question let us suppose for the moment
that we are given some smooth solution u and try to compute it. To do so, we first must recognize
that the PDE (2.1) states that a particular directional derivative of u vanishes. We exploit this
insight by fixing any point (t, x) ∈ Ω and defining
z(s) := u(t + s, x + sb) ,  s ∈ R
Then dz/ds = ut + b · ∇u = 0, so z is constant in s: u is constant along the line through (t, x)
with direction (1, b).
Remark. As is well known from Calculus, the directional derivative of a function
u : D ⊂ Rm → R along a vector n ∈ Rm is given by:
∂u/∂n = n · ∇u    (2.4)
Then we can easily write:
∂u/∂n = n · (∂t , ∇)u = ut + b · ∇u = 0 ,  n = (1, b) ∈ ]0, +∞[ × Rⁿ    (2.5)
as previously stated.
2.1 Initial-value Problem
Let us consider the initial-value problem (IC)
ut + b · ∇u = 0 ,  in Ω = ]0, +∞[ × Rⁿ
u(0, x) = g(x)    (2.6)
Arguing as above, u is constant along the lines s ↦ (t + s, x + sb); evaluating at s = −t gives
u(t, x) = g(x − tb) ,  t ≥ 0 , x ∈ Rⁿ    (2.7)
2.2 Non-homogeneous Problem
Consider now the non-homogeneous problem
ut + b · ∇u = f ,  u(0, x) = g(x)    (2.8)
As before, given (t, x) ∈ R1+n and defining z(s) := u(t + s, x + sb) with s ∈ R, we now have
dz/ds = ut (t + s, x + sb) + b · ∇u(t + s, x + sb) = f (t + s, x + sb)    (2.9)
Consequently,
z(0) − z(−t) = u(t, x) − g(x − bt) = ∫_{−t}^{0} (dz/ds) ds = ∫_{−t}^{0} f (t + s, x + sb) ds
= {s → s′ = s + t} = ∫_{0}^{t} f (s′, x + (s′ − t)b) ds′    (2.10)
and so
u(t, x) = g(x − bt) + ∫_{0}^{t} f (s, x + (s − t)b) ds ,  t ≥ 0 , x ∈ Rⁿ    (2.11)
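Formula (2.11) can be tested numerically in one space dimension. In the sketch below the data b, g and f are illustrative choices of mine, not from the text; the Duhamel integral is computed with a midpoint rule, and the residual ut + b ux − f is then estimated by central finite differences:

```python
import math

# Illustrative test of (2.11) in one space dimension:
#   u(t, x) = g(x - b*t) + \int_0^t f(s, x + (s - t)*b) ds
# b, g, f below are my own sample choices, not from the text.

b = 0.7
g = lambda x: math.sin(x)
f = lambda t, x: math.cos(t) * math.exp(-x * x)

def u(t, x, n=2000):
    # midpoint rule for the Duhamel integral
    h = t / n
    acc = sum(f((i + 0.5) * h, x + ((i + 0.5) * h - t) * b) for i in range(n))
    return g(x - b * t) + h * acc

def residual(t, x, eps=1e-4):
    # finite-difference estimate of u_t + b*u_x - f at (t, x); should be ~0
    ut = (u(t + eps, x) - u(t - eps, x)) / (2.0 * eps)
    ux = (u(t, x + eps) - u(t, x - eps)) / (2.0 * eps)
    return ut + b * ux - f(t, x)

if __name__ == "__main__":
    print(abs(residual(0.8, 0.3)))  # small (quadrature + differencing error)
```

At t = 0 the integral term vanishes and the formula reduces to the initial datum g, as it should.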
Chapter 3
We consider a general second order linear/semilinear equation in two variables,
a(x, y) uxx + b(x, y) uxy + c(x, y) uyy + (lower order terms) = 0    (3.1)
together with a change of variables ξ = ξ(x, y), η = η(x, y) whose Jacobian
J = ξx ηy − ξy ηx ≠ 0
so that the transformation is invertible.
As before, we compute the chain-rule derivatives
ux = uξ ξx + uη ηx
uy = uξ ξy + uη ηy    (3.2)
uxx = ∂x (uξ ξx + uη ηx ) = uξξ ξx² + 2 uξη ξx ηx + uηη ηx² + uξ ξxx + uη ηxx
and analogously for uyy , uxy .
Substituting into the original equation we obtain
A uξξ + B uξη + C uηη + (lower order terms) = 0    (3.3)
with
A = a ξx² + b ξx ξy + c ξy²
B = 2a ξx ηx + b (ξx ηy + ξy ηx ) + 2c ξy ηy
C = a ηx² + b ηx ηy + c ηy²
where we have written explicitly only the Principal Part of the PDE, precisely the one involving
the highest-order derivatives of u.
This expression may seem as difficult as the first one; on the other hand, we have the remarkable
freedom of setting A, B or C (depending on the kind of the PDE) to zero by means of a wise
choice of ξ and η.
Moreover, it is easy to verify that (Exercise 2)
B² − 4AC = J² (b² − 4ac)    (3.4)
So, provided J ≠ 0, we see that the sign of the discriminant b² − 4ac is invariant under coordinate
transformations. We can use this invariance property to classify the equation.
Studying the structure of the Characteristic Equations (which is not of our concern) we have
to distinguish three different cases:
1. If b² − 4ac > 0 we can find a change of variables (x, y) → (ξ, η) that transforms the
original PDE into
uξη + (lower order terms) = 0    (3.5)
In this case the equation is said to be Hyperbolic and has two families of characteristics.
2. If b² − 4ac = 0, a suitable choice for ξ still simplifies the PDE, but now we can choose η
arbitrarily, and the equation reduces to the form
uηη + (lower order terms) = 0    (3.6)
The equation is then said to be Parabolic and has only one family of characteristics.
3. If b² − 4ac < 0 we can again apply the change of variables (x, y) → (ξ, η) to simplify the
equation, but now these functions will be complex conjugates. To keep the transformation
real, we apply a further change of variables (ξ, η) → (α, β) via
α = ξ + η ,  β = i (ξ − η)  =⇒  uξη = uαα + uββ    (3.7)
In this case the equation is said to be Elliptic and has no real characteristics.
The above forms are called the Canonical Forms of the Second Order Linear/Semilinear equa-
tions in two variables.
• These definitions are all taken at a point x0 ∈ R2 ; unless a, b and c are all constant, the
type may change with the point x0 .
Example 9. Here are just a few examples concerning the previous statements.
• The wave equation utt − c² uxx = 0: a = 1, b = 0, c = −c² =⇒ b² − 4ac = 4c² > 0, hence hyperbolic;
• The heat equation ut − κ uxx = 0: a = 0, b = 0, c = −κ =⇒ b² − 4ac = 0, hence parabolic;
• The Laplace equation uxx + uyy = 0: a = c = 1, b = 0 =⇒ b² − 4ac = −4 < 0, hence elliptic.
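The classification rule can be packaged as a small helper; the three calls below mirror the examples above, with the convention that a, b, c multiply uxx, uxy, uyy respectively:

```python
# Helper mirroring the rule above: the type of
#   a*u_xx + b*u_xy + c*u_yy + (lower order terms) = 0
# at a point is decided by the sign of b^2 - 4ac.

def classify(a, b, c):
    disc = b * b - 4.0 * a * c
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptic"

if __name__ == "__main__":
    print(classify(1.0, 0.0, -4.0))  # wave equation with c^2 = 4: hyperbolic
    print(classify(0.0, 0.0, -1.0))  # heat equation (principal part): parabolic
    print(classify(1.0, 0.0, 1.0))   # Laplace equation: elliptic
```

Since the type is a pointwise notion, for variable coefficients one would evaluate a, b, c at the point x0 of interest before calling the helper.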
Chapter 4
Heat Equation
The heat equation for a function u(t, x), with x ∈ Rn and t > 0, is
ut − D ∆u = 0 ,  ∆ ≡ div ∇ = Σ_{i=1}^{n} ∂xi²    (4.1)
Here, the constant D > 0 is the diffusion coefficient and ∆ is the Laplacian operator.
In this Chapter we will briefly discuss the main results concerning this important equation, such
as: the fundamental solution, the technique of separation of variables and the energy method.
4.1 Fundamental Solution
Recall how the Fourier transform (in the spatial variables) acts on derivatives:
ĝt (t, k) = ∂t ĝ(t, k)  and  (∆g)^(t, k) = (ik)² ĝ(t, k) ≡ −k² ĝ(t, k)    (4.5)
where k² := |k|².
Moreover,
(f ∗ g)^(k) = f̂ (k) ĝ(k)    (4.6)
where the ∗ represents the Fourier convolution integral, i.e.
(f ∗ g)(x) := ∫_{Rⁿ} f (x − y) g(y) dⁿy = (g ∗ f )(x)
Now, taking the Fourier transform of both the equation and the IC we get
ût + k² D û = 0 ,  û = û(t, k)
û(0, k) = φ̂(k)    (4.7)
Multiplying both sides of the first equation by the integrating factor exp(Dk²t), the equation
becomes
∂t [ e^{Dk²t} û(t, k) ] = 0    (4.8)
Thus,
û(t, k) = f (k) exp(−Dk²t) ,  with f (k) an arbitrary function    (4.9)
Using the initial condition,
û(0, k) = φ̂(k)  =⇒  f (k) = φ̂(k)    (4.10)
Thus, the Fundamental Solution for the Heat Equation is given by:
Γ(t, x) = (4πDt)^{−n/2} exp( −|x|²/(4Dt) )    (4.14)
and the general solution for the Cauchy problem is then given by:
u(t, x) = (Γ(t) ∗ φ)(x) = ∫_{Rⁿ} Γ(t, x − y) φ(y) dⁿy    (4.15)
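A hedged numerical check of (4.14)-(4.15) in one dimension: convolving the heat kernel with a normalized Gaussian of variance s² must return a normalized Gaussian of variance s² + 2Dt, since variances add under convolution of Gaussians. The quadrature bounds and parameter values below are illustrative choices of mine:

```python
import math

# Illustrative 1D check of (4.14)-(4.15): heat kernel * Gaussian = Gaussian
# whose variance is the sum s^2 + 2*D*t. Parameters below are my choices.

D = 0.5

def kernel(t, x):
    # fundamental solution Gamma(t, x) in one dimension
    return math.exp(-x * x / (4.0 * D * t)) / math.sqrt(4.0 * math.pi * D * t)

def gaussian(x, var):
    # normalized Gaussian with the given variance
    return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def u(t, x, s2=0.3, lo=-15.0, hi=15.0, n=4000):
    # trapezoidal convolution u(t, x) = \int Gamma(t, x - y) * phi(y) dy
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        y = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * kernel(t, x - y) * gaussian(y, s2)
    return h * total

if __name__ == "__main__":
    t, x = 0.8, 0.4
    print(u(t, x), gaussian(x, 0.3 + 2.0 * D * t))  # should agree closely
```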
4.2 Separation of Variables
In this section we introduce a very useful technique, called the method of separation of
variables, for solving initial boundary-value problems.
We consider the heat equation in (1 + 1)D satisfying the initial conditions
ut − D uxx = 0 ,  x ∈ [0, L] , t > 0
u(0, x) = φ(x) ,  u = u(t, x)    (4.16)
We seek a solution u satisfying certain boundary conditions (BC); in the present case we consider
the so called Dirichlet BC, i.e.
u(t, 0) = u(t, L) = 0 ,  ∀t > 0    (4.17)
Substituting the separated ansatz u(t, x) = T (t) X(x) into the equation and dividing by
D T (t) X(x), we get
Ṫ (t) / (D T (t)) = X′′(x) / X(x)    (4.20)
The left side depends only on t whereas the right-hand side depends only on x. Since they are
equal, they must be equal to some constant −λ, with λ ∈ R. Thus
Ṫ (t) = −λ D T (t) ,  X′′(x) + λ X(x) = 0
In addition, the function X which solves the second equation will satisfy boundary conditions
depending on the boundary condition imposed on u.
The problem
X′′(x) + λ X(x) = 0
X satisfies boundary conditions    (4.24)
is called the eigenvalue problem; a non-trivial solution is called an eigenfunction associated
with the eigenvalue λ.
So, writing the general solution as X(x) = α cos(√λ x) + β sin(√λ x) and imposing the Dirichlet
conditions:
X(0) = α = 0
X(L) = β sin(√λ L) = 0    (4.27)
Since X must be a non-trivial solution, β ≠ 0 and hence sin(√λ L) = 0. Consequently, in order
to get a non-trivial solution we must have
λn = (nπ/L)² ,  n ≥ 1    (4.28)
and the corresponding eigenfunctions are given by
Xn (x) = βn sin(nπx/L) ,  Tn (t) = An exp(−(nπ/L)² D t)    (4.29)
so that each separated solution reads un (t, x) = Bn exp(−n²π²Dt/L²) sin(nπx/L), with Bn ≡ An βn .
Finally, thanks to the linearity of the Heat Equation, we can obtain the general solution by means
of the superposition principle:
u(t, x) = Σ_{n=1}^{∞} un (t, x) = Σ_{n=1}^{∞} Bn exp(−n²π²Dt/L²) sin(nπx/L)    (4.31)
In order to compute the coefficients Bn , one should note the following results (Prove it!):
(2/L) ∫_0^L sin(nπx/L) sin(mπx/L) dx = δnm
(2/L) ∫_0^L cos(nπx/L) cos(mπx/L) dx = δnm    (4.33)
∫_0^L sin(nπx/L) cos(mπx/L) dx = 0
Setting t = 0 in (4.31) gives the Fourier sine expansion φ(x) = Σ_{n=1}^{∞} Bn sin(nπx/L) (4.32).
Multiplying both sides of Eq. (4.32) by sin(mπx/L), with fixed m, and then integrating over
[0, L], we get:
Bm = (2/L) ∫_0^L φ(x) sin(mπx/L) dx    (4.34)
Remark. Here we have made use of the well known Kronecker delta, which is defined as
δnm = 1 if n = m ,  δnm = 0 if n ≠ m
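The recipe (4.32)-(4.34) is easy to test numerically. In the sketch below the initial datum φ(x) = x(L − x) is an illustrative choice of mine; the coefficients Bm are computed by trapezoidal quadrature and the truncated series is evaluated at t = 0, where it should reproduce φ:

```python
import math

# Illustrative test of (4.32)-(4.34): compute the sine coefficients of
# phi(x) = x*(L - x) (my sample datum) by quadrature and check that the
# truncated series reproduces phi at t = 0.

L = 2.0
phi = lambda x: x * (L - x)

def coeff(m, npts=2000):
    # B_m = (2/L) * \int_0^L phi(x) sin(m*pi*x/L) dx, trapezoidal rule
    h = L / npts
    total = 0.0
    for i in range(npts + 1):
        x = i * h
        w = 0.5 if i in (0, npts) else 1.0
        total += w * phi(x) * math.sin(m * math.pi * x / L)
    return 2.0 / L * h * total

def series(x, t=0.0, D=1.0, terms=60):
    # truncated solution (4.31)
    return sum(coeff(n) * math.exp(-(n * math.pi / L) ** 2 * D * t)
               * math.sin(n * math.pi * x / L) for n in range(1, terms + 1))

if __name__ == "__main__":
    x0 = 0.7
    print(series(x0), phi(x0))  # should agree to a few decimal places
```

For this symmetric datum the even-index coefficients vanish, which the quadrature reproduces to rounding accuracy.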
4.3 Energy Method
We next consider the inhomogeneous heat equation with some auxiliary conditions, and use
the energy method to show that the solution satisfying those conditions must be unique. Consider
the following mixed initial-boundary value problem, which is called the Dirichlet problem for the
heat equation
ut − D uxx = f ,  x ∈ [0, L] , t > 0
u(0, x) = φ(x) ,  u = u(t, x) , f = f (t, x)    (4.35)
u(t, 0) = g(t) ,  u(t, L) = h(t) ,  ∀t > 0
Suppose u1 and u2 are two solutions of (4.35) and set w := u1 − u2 ; then w solves the homogeneous
Dirichlet problem with zero data. Define the energy
E(t) := (1/2) ∫_0^L w²(t, x) dx    (4.37)
which is always non-negative, and decreasing if w solves the heat equation. Indeed, differentiating
the energy with respect to time, and using the heat equation, we get
dE/dt = ∫_0^L w wt dx = D ∫_0^L w wxx dx    (4.38)
Integrating by parts and using the vanishing boundary values of w,
dE/dt = −D ∫_0^L wx² dx ≤ 0
Since E(0) = 0, E ≥ 0 and E is non-increasing, we conclude that E(t) = 0 for all t, hence w ≡ 0:
the solution of (4.35) is unique.
Chapter 5
Wave Equation
In this first part we will solve the wave equation on the entire real line x ∈ R. This corresponds
to a string of infinite length.
The wave equation, which describes the dynamics of the amplitude u(t, x) of the point at position
x on the string at time t, has the following form
utt − c2 uxx = 0 (5.1)
As we saw, the wave equation has the second canonical form for hyperbolic equations.
By means of the characteristic equations, one can then rewrite this equation in the first canonical
form, which is
uξη = 0 (5.2)
This is achieved by passing to the characteristic variables
ξ = x + ct , η = x − ct (5.3)
To show the equivalence between (5.1) and (5.2), let us compute the partial
derivatives of u with respect to x and t in the new variables using the chain rule.
ux = uξ + uη
(5.4)
ut = cuξ − cuη
Now, if we realize that, for u ∈ C², the second order linear operator of the wave equation (the
d’Alembert operator) factors into two first order operators
□ := ∂t² − c² ∂x² = (∂t − c ∂x )(∂t + c ∂x )    (5.5)
which, under the coordinate transformation (x, t) → (ξ, η), becomes
□ = −4c² ∂ξ ∂η    (5.6)
as we expected.
Now, Eq. (5.2) can be treated as a pair of successive ODEs. Integrating first with respect to
the variable η, and then with respect to ξ, we arrive at the solution
u(ξ, η) = f (ξ) + g(η)    (5.7)
Thus, the general solution is given by
u(t, x) = f (x + ct) + g(x − ct)    (5.8)
5.1 Initial Value Problem
If we consider the following Cauchy Problem for the wave equation in (1 + 1)D
utt − c² uxx = 0 ,  x ∈ R , t > 0
u(0, x) = φ(x) ,  u = u(t, x)    (5.9)
ut (0, x) = ψ(x)
substituting the general solution (5.8) into the initial conditions yields
f (x) + g(x) = φ(x) ,  c [f ′(x) − g′(x)] = ψ(x)
where f ′ = fx .
To solve this system, we first integrate both sides of the latter equation from 0 to x, to get rid of
the derivatives on f and g, and rewrite the equations as
f (x) = φ(x)/2 + (1/2c) ∫_0^x ψ(s) ds + (f (0) − g(0))/2
g(x) = φ(x)/2 − (1/2c) ∫_0^x ψ(s) ds − (f (0) − g(0))/2    (5.13)
Finally, substituting these expressions for f and g back into the general solution (5.8), we get
u(t, x) = [φ(x + ct) + φ(x − ct)]/2 + (1/2c) ∫_{x−ct}^{x+ct} ψ(s) ds    (5.14)
This is the well known d’Alembert’s solution for the Cauchy Problem of our concern.
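d'Alembert's formula can be verified directly: for smooth data the right-hand side of (5.14) must satisfy utt = c² uxx. In the sketch below φ and ψ are illustrative choices of mine (ψ is chosen with an elementary antiderivative so the integral in (5.14) is exact), and the wave operator is applied by second-order finite differences:

```python
import math

# Illustrative check of d'Alembert's formula (5.14): phi and psi are my
# sample choices; Psi is an exact antiderivative of psi, so the integral
# in (5.14) equals Psi(x + c*t) - Psi(x - c*t).

c = 1.3
phi = lambda x: math.exp(-x * x)
psi = lambda x: math.sin(x)
Psi = lambda x: -math.cos(x)  # antiderivative of psi

def u(t, x):
    return 0.5 * (phi(x + c * t) + phi(x - c * t)) \
         + (Psi(x + c * t) - Psi(x - c * t)) / (2.0 * c)

def residual(t, x, eps=1e-3):
    # second-order finite differences for u_tt - c^2 u_xx; should be ~0
    utt = (u(t + eps, x) - 2.0 * u(t, x) + u(t - eps, x)) / eps ** 2
    uxx = (u(t, x + eps) - 2.0 * u(t, x) + u(t, x - eps)) / eps ** 2
    return utt - c * c * uxx

if __name__ == "__main__":
    print(u(0.0, 0.3) - phi(0.3))   # initial condition holds exactly
    print(abs(residual(0.5, 0.2)))  # small discretization error
```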
5.1.1 Causality
The value of the solution to the former Cauchy problem at a certain point (t0 , x0 ) can be
deduced from d’Alembert’s formula:
u(t0 , x0 ) = [φ(x0 + ct0 ) + φ(x0 − ct0 )]/2 + (1/2c) ∫_{x0−ct0}^{x0+ct0} ψ(s) ds    (5.15)
We can see that this value depends on the values of φ at only two points on the x axis, x0 + ct0
and x0 − ct0 , and the values of ψ only on the interval [x0 − ct0 , x0 + ct0 ].
For this reason, the interval [x0 − ct0 , x0 + ct0 ] is called interval of dependence for the point
(t0 , x0 ). Sometimes the entire triangular region with vertices at (0, x0 − ct0 ), (0, x0 + ct0 ) and
(t0 , x0 ) is called the domain of dependence, or the past history of the point (t0 , x0 ). The
sides of this triangle are segments of characteristic lines passing through the point (t0 , x0 ). Thus,
we see that the initial data travels along the characteristics to give the values at later times.
An inverse notion to the domain of dependence is the notion of domain of influence of the
point (0, x0 ). This is the region in the x − t plane consisting of all the points, whose domain of
dependence contains the point x0 . The region has an upside-down triangular shape, with the sides
being the characteristic lines emanating from the point (0, x0 ).
This also means that the value of the initial data at the point x0 affects the values of the solution
u at all the points in the domain of influence. Notice that at a fixed time t0 , only the points
satisfying x0 − ct0 ≤ x ≤ x0 + ct0 are influenced by the point (0, x0 ).
5.2 Energy Method
We claim that the solution of the Cauchy Problem (5.9) is unique.
Arguing by contradiction, let us assume that this problem has two distinct solutions, u and v.
Then their difference, w = u − v, satisfies the following Cauchy Problem
wtt − c² wxx = 0 ,  x ∈ R , t > 0
w(0, x) = 0 ,  w = w(t, x)    (5.17)
wt (0, x) = 0
Define the energy
E(t) := (1/2) ∫_{−∞}^{+∞} (wt² + c² wx²) dx    (5.18)
The initial conditions imply
E(t = 0) = 0    (5.19)
36
At the same time,
dE/dt = ∫_{−∞}^{∞} (wt wtt + c² wx wtx ) dx ≐ ∫_{−∞}^{∞} (wtt − c² wxx ) wt dx = 0 ,  ∀t ≥ 0    (5.20)
where ≐ stands for "up to boundary terms". So, we have been able to show that E(t) is conserved.
But since the integrand in the expression of the energy is non-negative, the only way the integral
can be zero is if the integrand is uniformly zero. That is,
∇w = (wt , wx ) = 0
This implies that w = const. ∀(t, x) ∈ R+ × R, and since w(0, x) = 0, the constant is zero.
Thus w ≡ 0, i.e. u = v, which contradicts our initial assumption of distinctness of u and v.
This implies that the solution to the former Cauchy problem is unique.