
PDE (Math 5163 - 001) Spring 2012

Section 1. First order PDE.


In general we call an equation of the form

F(x, u, u_{x_1}, ..., u_{x_n}) = 0 ,    (FO)

a first order PDE. Here u : D → R (or C) is an unknown function defined on a set D ⊆ R^n, and x = (x_1, ..., x_n) ∈ D.
A function u = u(x) is called a solution of the PDE in D if the equation holds for all x ∈ D. We also say u solves the equation in D.
For a function f(x) of several variables we denote the partial derivatives by

∂_{x_i} f = f_{x_i} = f_i = D^α f ,    ∂²_{x_i x_j} f = f_{x_i x_j} = f_{ij} = D^β f ,

with

α = (0, ..., 0, 1, 0, ..., 0) (the 1 in the i-th position),
β = (0, ..., 0, 1, 0, ..., 0, 1, 0, ..., 0) (the 1's in the i-th and j-th positions).
Example:

∂³_{x_2} ∂²_{x_4} f = f_{(22244)} = D^{(0,3,0,2,0)} f

for a function f : R^5 → R.
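A quick way to see the multi-index notation in action is symbolic differentiation. The following is a small sketch, assuming the Python library sympy is available; the function f below is an illustrative choice, not from the notes.

    import sympy as sp

    x1, x2, x3, x4, x5 = sp.symbols('x1 x2 x3 x4 x5')
    f = x2**4 * x4**3 * x5   # illustrative smooth function on R^5

    # D^(0,3,0,2,0) f = f_(22244): three derivatives in x2, two in x4
    print(sp.diff(f, x2, 3, x4, 2))   # -> 144*x2*x4*x5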
The equation (FO) is called linear if F is linear or affine in (u, u_{x_1}, ..., u_{x_n}), with coefficients depending only on x.
It is called quasilinear if F is linear or affine in (u_{x_1}, ..., u_{x_n}), with coefficients that may also depend on u.
That is, a linear equation we can write as

L(u) := a_1(x) u_{x_1} + ... + a_n(x) u_{x_n} + b(x) u = d(x) ,

and a quasilinear equation as

A(u) := a_1(x, u) u_{x_1} + ... + a_n(x, u) u_{x_n} + b(x, u) = d(x) .
Here d(x) is called the inhomogeneous term or inhomogeneity. We call the equation homogeneous if d ≡ 0 and inhomogeneous otherwise.
We say that the linear equation has constant coefficients if a_1, ..., a_n, b, d are constants.
Equations with constant coefficients.
Firstly, we consider

a_1 u_{x_1} + ... + a_n u_{x_n} = 0 ,

and remark that the left hand side is the directional derivative of u with respect to the vector a = (a_1, ..., a_n), so we can write

∂_a u = 0 .
Recall:

∂_a u(x) = lim_{h→0} (1/h) (u(x + ha) − u(x))

is the directional derivative with respect to the vector a.
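The limit definition can also be checked numerically. Here is a tiny sketch, assuming numpy; the function u and the point are illustrative choices, not from the notes. It compares the difference quotient with ∇u · a.

    import numpy as np

    u = lambda x: np.sin(x[0]) * x[1]   # illustrative function on R^2
    grad_u = lambda x: np.array([np.cos(x[0]) * x[1], np.sin(x[0])])

    x = np.array([1.0, 2.0])    # base point
    a = np.array([3.0, -1.0])   # direction vector
    h = 1e-6

    fd = (u(x + h * a) - u(x)) / h   # difference quotient from the definition
    print(fd, grad_u(x) @ a)         # the two values agree up to O(h)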
So the PDE tells us that u is constant along lines with direction vector a, and that is all the information contained in the equation. Therefore a function is not uniquely determined by the PDE.
Remark:
In general a PDE will not have a unique solution unless additional conditions are supplied; we refer to those as auxiliary conditions.
Note that we encountered this already in Calculus II: an antiderivative F is a solution of the most basic differential equation

F'(x) = f(x) ,

and we already have a continuum of solutions.
So with any PDE there comes the task to find auxiliary conditions such that together with them a solution is uniquely determined.
A problem consisting of a PDE and auxiliary condition(s) is sometimes called well posed if it has a unique solution. But more often that notion is used in a more restrictive manner; it then also includes conditions on how the solution depends on the data. Here, data refers to the coefficients of the PDE as well as to the terms of the auxiliary condition(s).
In our case we have a well posed problem if, for instance, we prescribe the values of u on a hyperplane of R^n which does not contain the vector a. So if a_1 ≠ 0 we may prescribe the values u(0, x_2, ..., x_n), and we conclude that

(IVP)  a_1 u_{x_1} + ... + a_n u_{x_n} = 0 in R^n ,
       u(0, x_2, ..., x_n) = f(x_2, ..., x_n)

has a unique solution, because each point x in R^n is on a line with direction vector a which intersects the hyperplane at some point x̄, say. At x̄, the value of the solution u is fixed by the initial condition

u(0, x̄_2, ..., x̄_n) = f(x̄_2, ..., x̄_n) .

Consequently, if a solution exists it is unique: at x its value has to be the same as the value at x̄.
So (IVP) allows at most one solution. But does a solution really exist?
The above argument allows us to determine the solution, too, at least as long as f is differentiable.
The value of the solution u at a point x is determined by finding the point x̄ in the hyperplane given by x_1 = 0 which is on the line through x with direction vector a.
The points on the line are given by x + ta, t ∈ R.
So for x = (x_1, ..., x_n), a = (a_1, ..., a_n) and x̄ = (0, x̄_2, ..., x̄_n), we have that

x + t_0 a = x̄ .

It implies x_1 + t_0 a_1 = 0, or t_0 = −x_1/a_1;
consequently our (IVP), if it has any solution at all, must have the solution

u(x) = u(x_1, x_2, ..., x_n) = f(x_2 − (x_1/a_1) a_2 , ..., x_n − (x_1/a_1) a_n) ,

and it is easy to check that u is indeed a solution.
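That check can also be delegated to a computer algebra system. The following sketch, assuming sympy and the illustrative choice n = 3, verifies both the PDE and the initial condition for an arbitrary differentiable f.

    import sympy as sp

    x1, x2, x3, a1, a2, a3 = sp.symbols('x1 x2 x3 a1 a2 a3')
    f = sp.Function('f')   # arbitrary differentiable initial datum

    # candidate solution u(x) = f(x2 - (x1/a1) a2, x3 - (x1/a1) a3)
    u = f(x2 - x1/a1*a2, x3 - x1/a1*a3)

    pde = a1*sp.diff(u, x1) + a2*sp.diff(u, x2) + a3*sp.diff(u, x3)
    print(sp.simplify(pde))   # -> 0, so the PDE holds
    print(u.subs(x1, 0))      # -> f(x2, x3), so the initial condition holds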
General linear equations
We consider

a_1 u_{x_1} + ... + a_n u_{x_n} + b u = d ,

where a_1, ..., a_n, b, d are functions in x = (x_1, ..., x_n), and we are looking for curves along which the PDE carries information.
In the previous case those had been the lines x̄ = x + ta. So more generally let's look for parametric curves of the form x(t) = (x_1(t), ..., x_n(t)) for some parameter t.
Then if u is a solution of the PDE we get that u(x(t)) is a function of one variable and

(d/dt) u(x(t)) = Σ_{i=1}^n u_{x_i}(x(t)) (d/dt) x_i(t) .

So if x(t) is a solution of the system

x'(t) = a(x(t)) ,    (DX)

then h(t) = u(x(t)) is a solution of the first order linear ODE

h'(t) + b̃(t) h(t) = d̃(t)    (DH)

with b̃(t) = b(x(t)) and d̃(t) = d(x(t)).
We know how to solve first order linear ODEs (see below), and the theory of systems of ODEs provides the existence of a unique solution of the initial value problem

(IVP_x̄)  x'(t) = a(x(t)) ,  x(0) = x̄ ,

if a is a continuously differentiable function of its arguments.
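When (DX) and (DH) cannot be solved in closed form they can still be integrated numerically. Below is a minimal sketch, assuming numpy/scipy; the coefficient functions are illustrative choices, not from the notes. It traces one characteristic curve for a linear equation a_1(x) u_{x_1} + a_2(x) u_{x_2} + b(x) u = d(x).

    import numpy as np
    from scipy.integrate import solve_ivp

    # illustrative coefficients: u_{x_1} + x_1 u_{x_2} = x_2, with b = 0
    a = lambda x: np.array([1.0, x[0]])
    b = lambda x: 0.0
    d = lambda x: x[1]

    def char(t, z):
        x, h = z[:2], z[2]
        # (DX): x' = a(x);  (DH): h' = -b(x(t)) h + d(x(t))
        return [*a(x), d(x) - b(x) * h]

    xbar, u0 = [0.0, 0.0], 1.0          # initial point and prescribed value u(xbar)
    sol = solve_ivp(char, (0.0, 1.0), [*xbar, u0], rtol=1e-9)
    print(sol.y[:2, -1], sol.y[2, -1])  # endpoint of the characteristic and u there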
Solutions c(t) = (x(t), h(t)) of (DX), (DH) are called characteristic curves.
Similarly as above we would like to construct, with characteristic curves, solutions of the PDE satisfying some auxiliary condition.
In order to make the situation somewhat more transparent we consider only the two dimensional case; instead of (x_1, x_2) we write (x, y) for points in the plane R^2.
In order to find solutions of the PDE, we impose an additional (auxiliary) condition which prescribes the value of u along a certain curve in R^2. Say, a curve given by a continuously differentiable function

g(s) = (g_1(s), g_2(s)) ∈ R^2 , for s ∈ R , with g(0) = (x_0, y_0) .

If g(s) and a solution (x(t), y(t)) of (DX) with (x(0), y(0)) = (x_0, y_0) do not have another point of intersection (near (x_0, y_0)), then for a given function g_3(s), say, we can find a characteristic curve
c_s(t) = (x_s(t), y_s(t), h_s(t))

of (DX), (DH) such that

(x_s(0), y_s(0), h_s(0)) = (g_1(s), g_2(s), g_3(s)) ,

for each s (near 0).
To go on we now assume that g_3 is a continuously differentiable function.
Then, because of the differentiable dependence of the solution of the ODE on the initial values, we obtain (near (0, 0) in R^2) differentiable functions X(s, t), Y(s, t), H(s, t) which, for fixed s, are the characteristic curves c_s(t), that is

(X(s, t), Y(s, t), H(s, t)) = (x_s(t), y_s(t), h_s(t)) .
Now we note that 𝒯(s, t) = (X(s, t), Y(s, t)) is a local transformation from R^2 to R^2 which has an inverse 𝒯^{-1}, provided the Jacobian matrix J_𝒯 is nonsingular.
Writing 𝒯^{-1}(x, y) = (S(x, y), T(x, y)), we claim that

u(x, y) = H(S(x, y), T(x, y))

solves the initial value problem
a_1 u_x + a_2 u_y + b u = d ,
u(g_1(s), g_2(s)) = g_3(s) .

Firstly, we note that it does satisfy the initial condition: we have

u(g_1(s), g_2(s)) = u(x_s(0), y_s(0)) = h_s(0) = g_3(s) .
To check the PDE we compute

u_x = H_s S_x + H_t T_x ,
u_y = H_s S_y + H_t T_y ,
and

a_1 u_x + a_2 u_y = H_s (a_1 S_x + a_2 S_y) + H_t (a_1 T_x + a_2 T_y) .
Next we note that the Jacobian of 𝒯 is given by

J_𝒯 = ( X_s  Y_s
        X_t  Y_t )  =  ( X_s  Y_s
                         a_1  a_2 ) .
Likewise the Jacobian of 𝒯^{-1} is given by

J_{𝒯^{-1}} = ( S_x  T_x
               S_y  T_y ) .
Since

J_𝒯 J_{𝒯^{-1}} = ( 1  0
                   0  1 ) ,

we have a_1 S_x + a_2 S_y = 0 and a_1 T_x + a_2 T_y = 1, yielding

a_1 u_x + a_2 u_y = H_t(S(x, y), T(x, y))
                  = −b̃(S(x, y), T(x, y)) H(S(x, y), T(x, y)) + d̃(S(x, y), T(x, y))
                  = −b(x, y) u(x, y) + d(x, y) ,

by (DH); that is, the PDE holds.
So indeed we have a (local) solution if the coefficient functions and g are continuously differentiable functions near the origin and J_𝒯 is a nonsingular matrix at 0.
The uniqueness under those assumptions follows from the uniqueness of the solution of the ODE:
Near (0, 0) a point (x_0, y_0) lies on a characteristic given by x_{s_0}(t) = X(s_0, t), y_{s_0}(t) = Y(s_0, t), with s_0 = S(x_0, y_0), for which we have

x_{s_0}(0) = g_1(s_0) ,  y_{s_0}(0) = g_2(s_0) .

At this point g_3 provides the value of u by

g_3(s_0) = u(x_{s_0}(0), y_{s_0}(0)) .
Along a characteristic (x_{s_0}(t), y_{s_0}(t)) the values of u are given by

u(x_{s_0}(t), y_{s_0}(t)) = h_{s_0}(t) ,

with x, y, h being solutions of the above ODEs with initial values

x_{s_0}(0) = g_1(s_0) ,  y_{s_0}(0) = g_2(s_0) ,  h_{s_0}(0) = g_3(s_0) ,

which determines the values of u along the characteristic (x_{s_0}(t), y_{s_0}(t)) (and therefore the value of u(x_0, y_0)) uniquely, by the general existence and uniqueness theorem for systems of ODEs. We conclude that there is at most one possible value the solution can attain at (x_0, y_0).
Remark
Somewhat ambiguously, the solutions of (DX) are often called characteristic curves, too. Here, though, we will always refer to them as characteristics. That is, a characteristic is the projection of a characteristic curve onto the domain in which the PDE is given.
Quasilinear Equations
We consider now the case of domains in R^n in the more general setting of quasilinear equations; that is, we consider

a_1 u_{x_1} + ... + a_n u_{x_n} + b = 0 ,

where

a_1 , ... , a_n , b

are functions in (x, u) = (x_1, ..., x_n, u).
In this case we cannot separate the ODEs by considering first the characteristics x(t) = (x_1(t), ..., x_n(t)). But we note that the system of n + 1 ODEs

(DES)  x_1'(t) = a_1(x(t), h(t)) ,
       ...
       x_n'(t) = a_n(x(t), h(t)) ,
       h'(t)  = −b(x(t), h(t)) ,

with initial value

(IV)  (x(0), h(0)) = (x̄, h̄) ,

has a unique continuously differentiable solution for vectors (x̄, h̄) ∈ R^n × R, depending continuously differentiably on its initial values. This allows us to argue similarly as above to obtain the local existence and uniqueness result for quasilinear first order PDEs. We have
Theorem 1.1
Let D ⊆ R^n be a domain and let a_1, ..., a_n, b : D × R → R be continuously differentiable functions.
Let g : R^{n−1} → R^n and f : R^n → R be differentiable functions with g(0) = x̄ ∈ D. Then the problem

(IVP)  a_1 u_{x_1} + ... + a_n u_{x_n} + b = 0 in D ,
       u(x) = f(x) on g(R^{n−1}) ∩ D ,
has a unique solution in a neighborhood of x̄ in D provided the matrix

J_0 = ( ∂_{s_1} g_1(0)      ...  ∂_{s_1} g_n(0)
        ...                      ...
        ∂_{s_{n−1}} g_1(0)  ...  ∂_{s_{n−1}} g_n(0)
        a_1(x̄, f(g(0)))     ...  a_n(x̄, f(g(0)))  )

is nonsingular.
Proof:
Let us consider the system (DES) above of ODEs with initial value

(x(0), h(0)) := (g(s), f(g(s))) ,

and consider these solutions (x_s(t), h_s(t)) as functions in (s, t) ∈ R^{n−1} × R, writing

(X(s, t), H(s, t)) = (x_s(t), h_s(t)) .
In a neighborhood of (0, 0) ∈ R^{n−1} × R, the mapping 𝒯 : (s, t) → X(s, t), as a local transformation of R^n → R^n, is invertible if its Jacobian matrix J_𝒯(0, 0) is nonsingular.
We have

∂_{s_j} X_i(s, 0) = ∂_{s_j} g_i(s) , so ∂_{s_j} X_i(0, 0) = ∂_{s_j} g_i(0) ,

and

∂_t X_i(0, t) = a_i(x_0(t), h_0(t)) , so ∂_t X_i(0, 0) = a_i(x̄, f(x̄)) .
Therefore

J_𝒯(0, 0) = ( ∂_{s_1} X_1      ...  ∂_{s_1} X_n
              ...                   ...
              ∂_{s_{n−1}} X_1  ...  ∂_{s_{n−1}} X_n
              ∂_t X_1          ...  ∂_t X_n        ) (0, 0) = J_0 ,

which is nonsingular by assumption, and we claim that

u(x) = H(S(x), T(x))

is a solution in a neighborhood of x̄, where the mapping x → (S(x), T(x)) is the inverse mapping 𝒯^{-1} of 𝒯, which exists in a neighborhood of x̄ because of the inverse function theorem, i.e. we have

(s, t) = (S(X(s, t)), T(X(s, t))) .

For the auxiliary condition we note that for x = g(s) near x̄ we have

u(x) = H(S(g(s)), T(g(s))) = H(S(X(s, 0)), T(X(s, 0)))
     = H(s, 0) = h_s(0) = f(g(s)) = f(x) .
To verify that u is a solution of the PDE, note that the Jacobian of 𝒯^{-1} is given by

J_{𝒯^{-1}} = ( ∂_{x_1} S_1  ...  ∂_{x_1} S_{n−1}  ∂_{x_1} T
               ...               ...              ...
               ∂_{x_n} S_1  ...  ∂_{x_n} S_{n−1}  ∂_{x_n} T ) ,
also

u_{x_i} = ∂_{x_i} H(S(x), T(x)) = ( Σ_{j=1}^{n−1} H_{s_j} ∂_{x_i} S_j ) + H_t ∂_{x_i} T ,

and by definition

∂_t X(s, t) = x_s'(t) = a(x_s(t), h_s(t)) = a(X(s, t), H(s, t)) .
We get

Σ_{i=1}^n a_i u_{x_i} = Σ_{i=1}^n ∂_t X_i ∂_{x_i} H(S(x), T(x))
  = ( Σ_{j=1}^{n−1} H_{s_j} Σ_{i=1}^n ∂_t X_i ∂_{x_i} S_j ) + H_t Σ_{i=1}^n ∂_t X_i ∂_{x_i} T
  = H_t = −b(x, u) ,

because the product of the Jacobi matrices is the identity matrix, so Σ_i ∂_t X_i ∂_{x_i} S_j = 0 and Σ_i ∂_t X_i ∂_{x_i} T = 1; that is, u solves the PDE.
It should be noted that the matrix

( ∂_{s_1} g_1(0)      ...  ∂_{s_1} g_n(0)
  ...                      ...
  ∂_{s_{n−1}} g_1(0)  ...  ∂_{s_{n−1}} g_n(0) )

is of maximal rank by assumption; hence the set g(R^{n−1}) is locally a C^1-manifold in R^n at g(0) = x̄.
The uniqueness again follows from the uniqueness of the initial value problem of the ODE.
Example.

(AVP)  x u_x + y u_y + y u = x ,
       u(x, 1) = g(x) .

Here (DX) reads x'(t) = x(t), y'(t) = y(t). Note that if, for instance, x(t) is invertible, that is, if we can write t = t(x), then we get y as a function of x and we have

(d/dx) y = y'(t) (d/dx) t = y · (1 / x'(t)) = y/x .
Step 1. Solve the ODE for the characteristics:

y' = y/x

is a separable equation. We get

∫ (y'/y) dx = ∫ (1/x) dx , or ln y = ln x + c ,

which in turn gives

y = e^{ln x + c} = α x , with α = e^c ; that is, α = y/x .
Step 2. Get the ODE for h(x) = u(x, y(x)) and solve it.
Since h(x) = u(x, αx) we have h' = u_x + α u_y = u_x + (y/x) u_y, and so

x h' + α x h = x .

The integrating factor for the linear first order ODE

h' + α h = 1

is exp(∫ α dx) = e^{αx}; here P(x) = α, Q(x) = 1. We get

(d/dx)(e^{αx} h) = e^{αx} , so e^{αx} h = (1/α) e^{αx} + C(α) ,

or

h(x) = e^{−αx} ((1/α) e^{αx} + C(α)) = 1/α + C(α) e^{−αx} .
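As a cross-check, a computer algebra system reproduces Step 2. A sketch assuming sympy, with α treated as a constant along the characteristic:

    import sympy as sp

    xs, alpha = sp.symbols('x alpha', positive=True)
    h = sp.Function('h')

    # h' + alpha h = 1, the ODE from Step 2
    print(sp.dsolve(h(xs).diff(xs) + alpha*h(xs) - 1, h(xs)))
    # -> Eq(h(x), C1*exp(-alpha*x) + 1/alpha)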
Step 3: α = y/x provides the general solution (note αx = y):

u(x, y) = x/y + C(y/x) e^{−y} .
Step 4: The auxiliary condition determines C:
We have

g(x) = u(x, 1) = x + C(1/x) e^{−1} .

With r = 1/x we get

C(r) = e (g(1/r) − 1/r) ,
and so the solution of the problem is

u(x, y) = x/y + e^{1−y} (g(x/y) − x/y) .
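One can verify this closed form symbolically. The sketch below, assuming sympy, checks the PDE and the auxiliary condition for an arbitrary differentiable g.

    import sympy as sp

    x, y = sp.symbols('x y', positive=True)
    g = sp.Function('g')   # arbitrary differentiable datum

    u = x/y + sp.exp(1 - y) * (g(x/y) - x/y)

    lhs = x*sp.diff(u, x) + y*sp.diff(u, y) + y*u
    print(sp.simplify(lhs - x))              # -> 0: x u_x + y u_y + y u = x holds
    print(sp.simplify(u.subs(y, 1) - g(x)))  # -> 0: u(x, 1) = g(x) holds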
Example 2
A nonlinear equation: the (inviscid) Burgers equation

u_t + u u_x = 0 .

(The viscous Burgers equation reads

u_t + u u_x = ε u_{xx} ,

where ε u_{xx} is called the viscosity term.) The inviscid Burgers equation is sometimes written as

u_t + (1/2) (u^2)_x = 0 .
The system of ODEs (DES) in this case, with τ as the variable now, is

dx/dτ = u ,
dt/dτ = 1 ,
dh/dτ = 0 ;

because dt/dτ = 1 we have (up to a translation) τ = t, and we can replace the variable τ with t.
Writing the characteristics as (x(t), t) we must have

(d/dt) x(t) = u(x(t), t) ,

and we get

(d/dt) u(x(t), t) = u_x (d/dt) x + u_t = u u_x + u_t = 0 ;

hence along characteristics the solution is constant, but then the characteristics are straight lines.
For the initial value problem

u_t + u u_x = 0 ,
u(x, 0) = g(x) ,

we conclude that the slope of a characteristic, as a function of t, is given by the value of u at its point of intersection with the t = 0 line.
Consequently, if for two points x_1 and x_2 with x_1 < x_2 we have

g(x_1) < g(x_2) ,

then the characteristics through (x_1, 0) and (x_2, 0), respectively, will not intersect for t > 0. Therefore, if g is nondecreasing, we have a global, continuously differentiable solution in R^2_+ := {(x, t) : t > 0}.
On the other hand, if

g(x_1) > g(x_2) ,

then the characteristics through (x_1, 0) and (x_2, 0), respectively, will intersect for some t > 0, carrying constant but different values of the solution. Therefore a continuous solution cannot exist in all of R^2_+.
That also allows us to write down the solution implicitly, at least locally: for the characteristic through (x̄, 0) we have x = t u + x̄, so we must have

u(x, t) = u(x̄, 0) = g(x̄) = g(x − t u) .

So the solution u(x, t) is determined by the implicit equation u = g(x − t u).
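Before characteristics cross, the implicit equation can be solved pointwise by root finding. A sketch assuming numpy/scipy; the datum g is an illustrative choice, not from the notes.

    import numpy as np
    from scipy.optimize import brentq

    g = lambda x: np.exp(-x**2)   # illustrative initial datum, values in (0, 1]

    def u(x, t):
        # root of F(v) = v - g(x - t v); the bracket [-1, 2] contains g's range
        return brentq(lambda v: v - g(x - t*v), -1.0, 2.0)

    print(u(0.5, 0.3))   # value of the solution at (x, t) = (0.5, 0.3)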
Weak solutions
The fact that u does not necessarily exist as a classical solution for all time raises the question whether we can get around that fact by generalizing our notion of a solution. In the theory of PDE there are many such generalizations.
Here we introduce the concept of a weak solution, which is to say that we consider the solution as a (very special) distribution.
Note that if we write the PDE in conservation form u_t + f(u)_x = 0 (with f(u) = u^2/2 for the Burgers equation), multiply it by a smooth test function φ, i.e. a function in C^1_0(R^2), and integrate over the upper half plane, then integration by parts gives

0 = ∫_{t>0} (u_t + f(u)_x) φ dx dt = −∫_R u φ|_{t=0} dx − ∫_{t>0} (u φ_t + f(u) φ_x) dx dt ,

since the boundary integrals are all zero except for the one with t = 0.
This is true for all test functions, suggesting the following
Definition
A locally integrable function u is called a weak solution of the initial value problem

u_t + f(u)_x = 0 ,
u(x, 0) = g(x) ,

if

∫_{R^2_+} (u φ_t + f(u) φ_x) dx dt + ∫_R g φ|_{t=0} dx = 0

for all functions φ ∈ C^1_0(R^2).
Remark:
Note that the second (boundary) integral does not contain u. That is important, as locally integrable functions are only defined a.e., and so the values of such a function at the boundary are not well defined.
If, however, the weak solution is smooth, say in C^1(R^2_+), then it is indeed a classical solution.
To see that we invoke the
Fundamental Theorem of the Calculus of Variations,
which states that for a domain Ω ⊆ R^n two functions f, g : Ω → R are equal a.e. if

∫_Ω (f − g) φ dx = 0 for all φ ∈ C^∞_0(Ω) .

If f, g are smooth functions then they are equal everywhere.
Firstly, we use this to show that a smooth weak solution satisfies the PDE in the classical sense:
For φ ∈ C^∞_0(R^2_+) we note that φ is zero for t = 0. Reading the above integration by parts backwards, the boundary terms vanish and we get

0 = ∫_{R^2_+} (u φ_t + f(u) φ_x) dx dt + ∫_R g φ|_{t=0} dx
  = ∫_{R^2_+} (u φ_t + f(u) φ_x) dx dt
  = −∫_{R^2_+} (u_t + f(u)_x) φ dx dt .

So the fundamental theorem of the calculus of variations gives u_t + f(u)_x = 0. So indeed it is a classical solution of the PDE.
Secondly, to see that it satisfies the initial condition in the usual sense, note that now we know

∫_{R^2_+} (u_t + f(u)_x) φ dx dt = 0

for all φ ∈ C^1_0(R^2), and in the computation above the boundary integrals do not vanish. We get
0 = ∫_{R^2_+} (u φ_t + f(u) φ_x) dx dt + ∫_R g φ|_{t=0} dx
  = −∫_R u φ|_{t=0} dx − ∫_{R^2_+} (u_t + f(u)_x) φ dx dt + ∫_R g φ|_{t=0} dx
  = −∫_R u φ|_{t=0} dx + ∫_R g φ|_{t=0} dx ,

so we have

∫_R (g − u(x, 0)) φ|_{t=0} dx = 0

for all test functions φ ∈ C^1_0(R^2), and we conclude again that

u(x, 0) = g(x) .
(Note: For every ψ ∈ C^1_0(R) there is a φ ∈ C^1_0(R^2) such that φ|_{t=0} = ψ.)
Remark:
Now we can think of solutions which are discontinuous, but not all discontinuities are allowed; actually these have to be very special.
To see that, we assume that D, the support of a test function φ ∈ C^1_0(R^2_+), is separated by a (smooth) jump line Γ into D_1 and D_2, and let

u_1 = u|_{D_1} and u_2 = u|_{D_2}

be the solution on either side of the jump line. (For this assumption to make sense we have to assume that both u_1 and u_2 can be extended continuously to the jump line.)
The definition of a weak solution now gives:

0 = ∫_{R^2_+} (u φ_t + f(u) φ_x) dx dt
  = ∫_{D_1} (u_1 φ_t + f(u_1) φ_x) dx dt + ∫_{D_2} (u_2 φ_t + f(u_2) φ_x) dx dt .
On the other hand, applying the divergence theorem to the vector fields (f(u_i), u_i) φ we get

∫_Γ (f(u_i), u_i) · n_i φ dσ = ∫_{D_i} div((f(u_i), u_i) φ) dx dt
  = ∫_{D_i} (f(u_i)_x + (u_i)_t) φ dx dt + ∫_{D_i} (f(u_i) φ_x + u_i φ_t) dx dt
  = ∫_{D_i} (f(u_i) φ_x + u_i φ_t) dx dt ,

since u_i is a classical solution in D_i, so f(u_i)_x + (u_i)_t = 0 there (the outer parts of ∂D_i contribute nothing, as φ vanishes on them),
and consequently

0 = ∫_Γ (f(u_1), u_1) · n_1 φ dσ + ∫_Γ (f(u_2), u_2) · n_2 φ dσ
  = ∫_Γ ( (f(u_1) − f(u_2)) (n_1)_1 + (u_1 − u_2) (n_1)_2 ) φ dσ ,

since n_2 = −n_1. Here n_i are the outer normals of the sets D_i along the boundary curve Γ, the jump line.
Since this is true for all test functions we must have

f(u_1) − f(u_2) = −((n_1)_2 / (n_1)_1) (u_1 − u_2)

at the boundary curve Γ.
This condition is usually called the jump condition or Rankine-Hugoniot condition, usually written in the form

s [u] = [f(u)]

with s = (d/dt) x when Γ is parametrized with respect to t, that is, for (x, t) ∈ Γ we have x = x(t).
(In this case we have (n_1)_2 = −ẋ(t) (n_1)_1, since n_1 is orthogonal to the tangent vector (ẋ(t), 1) of Γ, so the condition above becomes f(u_1) − f(u_2) = s (u_1 − u_2).)
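For the Burgers flux f(u) = u^2/2 the jump condition immediately gives the shock speed as the average of the two states; a small sympy sketch:

    import sympy as sp

    u1, u2 = sp.symbols('u1 u2')
    f = lambda v: v**2 / 2   # Burgers flux

    # s [u] = [f(u)]  =>  s = (f(u1) - f(u2)) / (u1 - u2)
    s = sp.simplify((f(u1) - f(u2)) / (u1 - u2))
    print(s)   # -> u1/2 + u2/2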