
Section 4.6   Variable-Coefficient Equations   193

EXERCISES

In Problems 1–8, find a general solution to the differential equation using the method of variation of parameters.
1. y″ + y = sec t
2. y″ + 4y = tan 2t
3. y″ − 2y′ + y = e^t
4. y″ − 2y′ + y = t^(−1) e^t
5. y″ + 9y = sec²(3t)
6. y″(θ) + 16y(θ) = sec 4θ
7. y″ + 4y′ + 4y = e^(−2t) ln t
8. y″ + 4y = csc²(2t)

In Problems 9 and 10, find a particular solution first by undetermined coefficients, and then by variation of parameters. Which method was quicker?
9. y″ − y = 2t + 4
10. 2x″(t) + 2x′(t) − 4x(t) = 2e^(2t)

In Problems 11–18, find a general solution to the differential equation.
11. y″ + y = tan²t
12. y″ + y = tan t + e^(3t) − 1
13. y″ + 4y = sec⁴(2t)
14. y″(θ) + y(θ) = sec³θ
15. y″ + y = 3 sec t − t² + 1


16. y″ + 5y′ + 6y = 18t²
17. (1/2)y″ + 2y = tan 2t − (1/2)e^t
18. y″ − 6y′ + 9y = t^(−3) e^(3t)
19. Express the solution to the initial value problem

  y″ + y = 1/t ,  y(1) = 0 ,  y′(1) = 2 ,

using definite integrals. Using numerical integration (Appendix C) to approximate the integrals, find an approximation for y(2) to two decimal places.
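A sketch of the computation for Problem 19, assuming the equation reads y″ + y = 1/t with y(1) = 0 and y′(1) = 2 (the signs are garbled in this copy). Variation of parameters gives the definite-integral form y(t) = 2 sin(t − 1) + ∫₁ᵗ sin(t − s)/s ds, and the integral can be approximated by Simpson's rule:

```python
from math import sin

def simpson(f, a, b, n=200):
    # Composite Simpson's rule (n must be even).
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

def y(t):
    # Definite-integral form of the solution of y'' + y = 1/t,
    # y(1) = 0, y'(1) = 2 (a reconstruction of Problem 19):
    #   y(t) = 2 sin(t - 1) + \int_1^t sin(t - s)/s ds
    return 2 * sin(t - 1) + simpson(lambda s: sin(t - s) / s, 1.0, t)

print(round(y(2.0), 2))   # ≈ 2.04
```

The two initial conditions are built into the formula: the integral vanishes at t = 1, and differentiating under the integral sign leaves y′(1) = 2.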
20. Use the method of variation of parameters to show that

  y(t) = c₁ cos t + c₂ sin t + ∫₀ᵗ f(s) sin(t − s) ds

is a general solution to the differential equation

  y″ + y = f(t) ,

where f(t) is a continuous function on (−∞, ∞). [Hint: Use the trigonometric identity sin(t − s) = sin t cos s − sin s cos t.]
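The formula in Problem 20 can be spot-checked numerically. The sketch below (an editorial aid, not part of the text) approximates the integral by Simpson's rule for the sample forcing f(s) = s, for which the integral evaluates in closed form to t − sin t, a function that indeed satisfies y″ + y = t:

```python
from math import sin

def simpson(f, a, b, n=200):
    # Composite Simpson's rule (n must be even).
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

def yp(t, f):
    # The candidate particular solution from Problem 20:
    #   yp(t) = \int_0^t f(s) sin(t - s) ds
    return simpson(lambda s: f(s) * sin(t - s), 0.0, t)

# For f(s) = s the integral works out to t - sin t.
print(yp(1.0, lambda s: s))   # ≈ 1 - sin 1 ≈ 0.1585
```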
21. Suppose y satisfies the equation y″ + 10y′ + 25y = e^(t³) subject to y(0) = 1 and y′(0) = −5. Estimate y(0.2) to within 0.0001 by numerically approximating the integrals in the variation of parameters formula.

4.7  VARIABLE-COEFFICIENT EQUATIONS
The techniques of Sections 4.2 and 4.3 have explicitly demonstrated that solutions to a linear homogeneous constant-coefficient differential equation,

(1)  ay″ + by′ + cy = 0 ,

are defined and satisfy the equation over the whole interval (−∞, ∞). After all, such solutions are combinations of exponentials, sinusoids, and polynomials.
The variation of parameters formula of Section 4.6 extended this to nonhomogeneous constant-coefficient problems,

(2)  ay″ + by′ + cy = f(t) ,

yielding solutions valid over all intervals where f(t) is continuous (ensuring that the integrals in (10) of Section 4.6 containing f(t) exist and are differentiable). We could hardly hope for more; indeed, it is debatable what meaning the differential equation (2) would have at a point where f(t) is undefined or discontinuous.

194   Chapter 4   Linear Second-Order Equations

Therefore, when we move to the realm of equations with variable coefficients of the form

(3)  a₂(t)y″ + a₁(t)y′ + a₀(t)y = f(t) ,

the most we can expect is that there are solutions that are valid over intervals where all four governing functions a₂(t), a₁(t), a₀(t), and f(t) are continuous. Fortunately, this expectation is fulfilled, except for an important technical requirement: the coefficient function a₂(t) must be nonzero over the interval.
Typically, one divides by the nonzero coefficient a₂(t) and expresses the theorem for the equation in standard form [see (4), below] as follows.

Existence and Uniqueness of Solutions

Theorem 5.  Suppose p(t), q(t), and g(t) are continuous on an interval (a, b) that contains the point t₀. Then, for any choice of the initial values Y₀ and Y₁, there exists a unique solution y(t) on the same interval (a, b) to the initial value problem

(4)  y″(t) + p(t)y′(t) + q(t)y(t) = g(t) ;  y(t₀) = Y₀ ,  y′(t₀) = Y₁ .

Example 1  Determine the largest interval for which Theorem 5 ensures the existence and uniqueness of a solution to the initial value problem

(5)  (t − 3) d²y/dt² + dy/dt + 2t y = ln t ;  y(1) = 3 ,  y′(1) = 5 .

Solution

The data p(t), q(t), and g(t) in the standard form y″ + py′ + qy = g of the equation,

  d²y/dt² + [1/(t − 3)] dy/dt + [2t/(t − 3)] y = (ln t)/(t − 3) ,

are simultaneously continuous in the intervals 0 < t < 3 and 3 < t < ∞. The former contains the point t₀ = 1, where the initial conditions are specified, so Theorem 5 guarantees that (5) has a unique solution in 0 < t < 3.
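The bookkeeping in this example (collect the points where the standard-form data fail to be continuous, then take the largest open interval around t₀ that avoids them) is mechanical enough to automate. A minimal sketch, using the breakpoints 0 and 3 found for the equation as reconstructed above:

```python
def largest_interval(singularities, t0):
    # Given the points where p, q, or g fails to be continuous,
    # return the largest open interval containing t0 on which
    # Theorem 5 applies.
    lo = max([s for s in singularities if s < t0], default=float("-inf"))
    hi = min([s for s in singularities if s > t0], default=float("inf"))
    return (lo, hi)

# For (t - 3)y'' + y' + 2t y = ln t in standard form, the data are
# discontinuous at t = 3 (division by t - 3) and for t <= 0 (ln t),
# so the breakpoints are 0 and 3, and t0 = 1 lies in (0, 3).
print(largest_interval([0, 3], 1))   # (0, 3)
```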
Theorem 5, embracing existence and uniqueness for the variable-coefficient case, is difficult to prove because we can't construct explicit solutions in the general case. So the proof is deferred to Chapter 13. However, it is instructive to examine a special case that we can solve explicitly.

Cauchy–Euler, or Equidimensional, Equations

Definition 2.  A linear second-order equation that can be expressed in the form

(6)  at²y″(t) + bty′(t) + cy = f(t) ,

where a, b, and c are constants, is called a Cauchy–Euler, or equidimensional, equation.

†Indeed, the whole nature of the equation (reduction from second order to first order) changes at points where a₂(t) is zero.

‡All references to Chapters 11–13 refer to the expanded text Fundamentals of Differential Equations and Boundary Value Problems, 6th ed.


For example, the differential equation

  3t²y″ + 11ty′ − 3y = sin t

is a Cauchy–Euler equation, whereas

  2y″ + 3ty′ + 11y = 3t + 1

is not, because the coefficient of y″ is 2, which is not a constant times t².
The nomenclature "equidimensional" comes about because if y has the dimensions of, say, meters and t has dimensions of time, then each term t²y″, ty′, and y has the same dimensions (meters). The coefficient of y″(t) in (6) is at², and it is zero at t = 0; equivalently, the standard form

  y″ + [b/(at)] y′ + [c/(at²)] y = f(t)/(at²)

has discontinuous coefficients at t = 0. Therefore, we can expect the solutions to be valid only for t > 0 or t < 0. Discontinuities in f, of course, will impose further restrictions.
To solve a homogeneous Cauchy–Euler equation, for t > 0, we exploit the equidimensional feature by looking for solutions of the form y = t^r, because then t²y″, ty′, and y each have the form (constant) × t^r:

  y = t^r ,  ty′ = t·rt^(r−1) = rt^r ,  t²y″ = t²·r(r − 1)t^(r−2) = r(r − 1)t^r ,

and substitution into the homogeneous form of (6) (that is, with f = 0) yields a simple quadratic equation for r:

  ar(r − 1)t^r + brt^r + ct^r = [ar² + (b − a)r + c]t^r = 0 , or

(7)  ar² + (b − a)r + c = 0 ,

which we call the associated characteristic equation.


Example 2  Find two linearly independent solutions to the equation

  3t²y″ + 11ty′ − 3y = 0 ,  t > 0 .

Solution  Inserting y = t^r yields, according to (7),

  3r² + (11 − 3)r − 3 = 3r² + 8r − 3 = 0 ,

whose roots r = 1/3 and r = −3 produce the independent solutions

  y₁(t) = t^(1/3) ,  y₂(t) = t^(−3)  (for t > 0) .
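Since the characteristic equation (7) is just a quadratic in r, its roots are easy to compute for any Cauchy–Euler equation. A small helper, checked against Example 2 (complex roots are handled by taking complex square roots):

```python
import cmath

def cauchy_euler_roots(a, b, c):
    # Roots of the associated characteristic equation (7):
    #   a r^2 + (b - a) r + c = 0
    # Returned as complex numbers so the complex-root case works too.
    B = b - a
    disc = cmath.sqrt(B * B - 4 * a * c)
    return ((-B + disc) / (2 * a), (-B - disc) / (2 * a))

r1, r2 = cauchy_euler_roots(3, 11, -3)
print(r1, r2)   # roots 1/3 and -3, as in Example 2
```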

Clearly, the substitution y = t^r into a homogeneous equidimensional equation has the same simplifying effect as the insertion of y = e^(rt) into the homogeneous constant-coefficient equation in Section 4.2. That means we will have to deal with the same encumbrances:
1. What to do when the roots of (7) are complex
2. What to do when the roots of (7) are equal
If r is complex, r = α + iβ, we can interpret t^(α+iβ) by using the identity t = e^(ln t) and invoking Euler's formula [equation (5), Section 4.3]:

  t^(α+iβ) = t^α t^(iβ) = t^α e^(iβ ln t) = t^α [cos(β ln t) + i sin(β ln t)] .


Then we simplify as in Section 4.3 by taking the real and imaginary parts to form independent solutions:

(8)  y₁ = t^α cos(β ln t) ,  y₂ = t^α sin(β ln t) .

If r is a double root of the characteristic equation (7), then independent solutions of the Cauchy–Euler equation on (0, ∞) are given by

(9)  y₁ = t^r ,  y₂ = t^r ln t .

This can be verified by direct substitution into the differential equation. Alternatively, the second, linearly independent, solution can be obtained by reduction of order, a procedure to be discussed shortly in Theorem 8. Furthermore, Problem 23 demonstrates that the substitution t = e^x changes the homogeneous Cauchy–Euler equation into a homogeneous constant-coefficient equation, and the formats (8) and (9) then follow from our earlier deliberations.
We remark that if a homogeneous Cauchy–Euler equation is to be solved for t < 0, one simply introduces the change of variable t = −τ, where τ > 0. The reader should verify via the chain rule that the identical characteristic equation (7) arises when τ^r = (−t)^r is substituted in the equation. Thus the solutions take the same form as (8) and (9), but with t replaced by −t; for example, if r is a double root of (7), we get (−t)^r and (−t)^r ln(−t) as two linearly independent solutions on (−∞, 0).
Example 3  Find a pair of linearly independent solutions to the following Cauchy–Euler equations for t > 0.
(a) t²y″ + 5ty′ + 5y = 0    (b) t²y″ + ty′ = 0

Solution  For part (a), the characteristic equation becomes r² + 4r + 5 = 0, with the roots r = −2 ± i, and (8) produces the real solutions t^(−2) cos(ln t) and t^(−2) sin(ln t).
For part (b), the characteristic equation becomes simply r² = 0, with the double root r = 0, and (9) yields the solutions t⁰ = 1 and ln t.
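Any claimed solution can be spot-checked without redoing the calculus: approximate y′ and y″ by central differences and confirm that the residual of the equation is numerically zero. Sketched here for the part (a) solution t^(−2) cos(ln t):

```python
from math import cos, log

def y(t):
    # One of the Example 3(a) solutions: t^(-2) cos(ln t)
    return cos(log(t)) / t**2

def residual(t, h=1e-5):
    # Central-difference approximations to y' and y'',
    # plugged into t^2 y'' + 5t y' + 5y.
    yp = (y(t + h) - y(t - h)) / (2 * h)
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    return t**2 * ypp + 5 * t * yp + 5 * y(t)

print(abs(residual(1.0)) < 1e-3)   # True: residual is discretization noise
```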
In Chapter 8 we will see how one can obtain power series expansions for solutions to variable-coefficient equations when the coefficients are analytic functions. But, as we said, there is no procedure for explicitly solving the general case. Nonetheless, thanks to the existence/uniqueness result of Theorem 5, most of the other theorems and concepts of the preceding sections are easily extended to the variable-coefficient case, with the proviso that they apply only over intervals on which the governing functions p(t), q(t), and g(t) are continuous. Thus we have the following analog of Lemma 1, page 162.

A Condition for Linear Dependence of Solutions

Lemma 3.  If y₁(t) and y₂(t) are any two solutions to the homogeneous differential equation

(10)  y″(t) + p(t)y′(t) + q(t)y(t) = 0

on an interval I where the functions p(t) and q(t) are continuous, and if the Wronskian

  W[y₁, y₂](t) := y₁(t)y₂′(t) − y₁′(t)y₂(t) = | y₁(t)   y₂(t)  |
                                               | y₁′(t)  y₂′(t) |

is zero at any point t of I, then y₁ and y₂ are linearly dependent on I.

†The determinant representation of the Wronskian was introduced in Problem 34, Section 4.2.


As in the constant-coefficient case, the Wronskian of two solutions is either identically zero or never zero on I, with the latter implying linear independence on I.
Precisely as in the proof for the constant-coefficient case, it can be verified that any linear combination c₁y₁ + c₂y₂ of solutions y₁ and y₂ to (10) is also a solution.

Representation of Solutions to Initial Value Problems

Theorem 6.  If y₁(t) and y₂(t) are any two solutions to the homogeneous differential equation (10) that are linearly independent on an interval I containing t₀, then unique constants c₁ and c₂ can always be found so that c₁y₁(t) + c₂y₂(t) satisfies the initial conditions y(t₀) = Y₀, y′(t₀) = Y₁ for any Y₀ and Y₁.

As in the constant-coefficient case, y_h = c₁y₁ + c₂y₂ is called a general solution to (10) on I if y₁, y₂ are linearly independent solutions on I.
For the nonhomogeneous equation

(11)  y″(t) + p(t)y′(t) + q(t)y(t) = g(t) ,

a general solution on I is given by y = y_p + y_h, where y_h = c₁y₁ + c₂y₂ is a general solution to the corresponding homogeneous equation (10) on I and y_p is a particular solution to (11) on I. In other words, the solution to the initial value problem stated in Theorem 5 must be of this form for a suitable choice of the constants c₁, c₂. This follows, just as before, from a straightforward extension of the superposition principle for variable-coefficient equations described in Problem 30.
If linearly independent solutions to the homogeneous equation (10) are known, then y_p can be determined for (11) by the variation of parameters method.

Variation of Parameters

Theorem 7.  If y₁ and y₂ are two linearly independent solutions to the homogeneous equation (10) on an interval I where p(t), q(t), and g(t) are continuous, then a particular solution to (11) is given by y_p = v₁y₁ + v₂y₂, where v₁ and v₂ are determined up to a constant by the pair of equations

  y₁v₁′ + y₂v₂′ = 0 ,
  y₁′v₁′ + y₂′v₂′ = g ,

which have the solution

(12)  v₁(t) = ∫ [−g(t)y₂(t) / W[y₁, y₂](t)] dt ,  v₂(t) = ∫ [g(t)y₁(t) / W[y₁, y₂](t)] dt .

Note that the formulation (12) presumes that the differential equation has been put into standard form [that is, divided by a₂(t)].
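As a concrete illustration of (12), take y″ + y = sec t with y₁ = cos t and y₂ = sin t, whose Wronskian is identically 1; the known particular solution on (−π/2, π/2) is y_p = (cos t) ln(cos t) + t sin t. The sketch below evaluates the integrals in (12) numerically (starting both antiderivatives at 0, i.e., taking the integration constants to be zero) and compares with the closed form:

```python
from math import sin, cos, log

def simpson(f, a, b, n=200):
    # Composite Simpson's rule (n must be even).
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

def yp(t):
    # Variation of parameters (12) for y'' + y = sec t,
    # with y1 = cos, y2 = sin, W[y1, y2] = 1.
    g = lambda s: 1.0 / cos(s)                        # sec s
    v1 = simpson(lambda s: -g(s) * sin(s), 0.0, t)    # -\int g y2 / W
    v2 = simpson(lambda s: g(s) * cos(s), 0.0, t)     # \int g y1 / W
    return v1 * cos(t) + v2 * sin(t)

print(yp(0.8))   # ≈ 0.3221, matching (cos t) ln(cos t) + t sin t
```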

The proofs of the constant-coefficient versions of these theorems in Sections 4.2 and 4.5 did not make use of the constant-coefficient property, so one can prove them in the general case by literally copying those proofs but interpreting the coefficients as variables. Unfortunately, however, there is no construction analogous to the method of undetermined coefficients for the variable-coefficient case.


What does all this mean? The only stumbling block to our completely solving nonhomogeneous initial value problems for equations with variable coefficients,

  y″ + p(t)y′ + q(t)y = g(t) ;  y(t₀) = Y₀ ,  y′(t₀) = Y₁ ,

is the lack of an explicit procedure for constructing independent solutions to the associated homogeneous equation (10). If we had y₁ and y₂ as described in the variation of parameters formula, we could implement (12) to find y_p, formulate the general solution of (11) as y_p + c₁y₁ + c₂y₂, and (with the assurance that the Wronskian is nonzero) fit the constants to the initial conditions. But with the exception of the Cauchy–Euler equation and the ponderous power series machinery of Chapter 8, we are stymied at the outset; there is no general procedure for finding y₁ and y₂.
Ironically, we need only one nontrivial solution to the associated homogeneous equation, thanks to a procedure known as reduction of order that constructs a second, linearly independent solution y₂ from a known one y₁. So one might well feel that the following theorem rubs salt into the wound.

Reduction of Order

Theorem 8.  Let y₁(t) be a solution, not identically zero, to the homogeneous differential equation (10) on an interval I (see page 196). Then

(13)  y₂(t) = y₁(t) ∫ [e^(−∫p(t)dt) / y₁(t)²] dt

is a second, linearly independent solution.
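Formula (13) can be exercised numerically even when the antiderivatives have no convenient closed form. A sketch using nested Simpson quadrature, checked on the constant-coefficient sanity case y″ − 2y′ + y = 0 with y₁ = e^t and p(t) = −2, for which (13) gives y₂ = t e^t:

```python
from math import exp

def simpson(f, a, b, n=200):
    # Composite Simpson's rule (n must be even).
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

def y2(t, y1, p, t_ref=0.0):
    # Formula (13): y2(t) = y1(t) * \int e^{-\int p dt} / y1(t)^2 dt,
    # with both antiderivatives started at t_ref (constants taken zero).
    inner = lambda x: simpson(p, t_ref, x)             # \int p dt
    integrand = lambda x: exp(-inner(x)) / y1(x) ** 2
    return y1(t) * simpson(integrand, t_ref, t, n=60)

# Sanity check: for y1 = e^t, p = -2, e^{-\int p} = e^{2t}, the
# integrand is identically 1, and (13) returns y2 = t e^t.
val = y2(1.5, lambda t: exp(t), lambda t: -2.0)
print(val)   # ≈ 1.5 e^{1.5}
```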

This remarkable formula can be confirmed directly, but the following derivation shows how the procedure got its name.

Proof of Theorem 8.  Our strategy is similar to that used in the derivation of the variation of parameters formula, Section 4.6. Bearing in mind that cy₁ is a solution of (10) for any constant c, we replace c by a function v(t) and propose the trial solution y₂(t) = v(t)y₁(t), spawning the formulas

  y₂′ = v′y₁ + vy₁′ ,  y₂″ = v″y₁ + 2v′y₁′ + vy₁″ .

Substituting these expressions into the differential equation (10) yields

  (v″y₁ + 2v′y₁′ + vy₁″) + p(v′y₁ + vy₁′) + qvy₁ = 0 ,

or, on regrouping,

(14)  (y₁″ + py₁′ + qy₁)v + y₁v″ + (2y₁′ + py₁)v′ = 0 .

The group in front of the undifferentiated v(t) is simply a copy of the left-hand member of the original differential equation (10), so it is zero. Thus (14) reduces to

(15)  y₁v″ + (2y₁′ + py₁)v′ = 0 ,

which is actually a first-order equation in the variable w = v′:

(16)  y₁w′ + (2y₁′ + py₁)w = 0 .

This is hardly a surprise; if v were constant, vy₁ would be a solution, with v″ = v′ = 0 in (14).


Indeed, (16) is separable and can be solved immediately using the procedure of Section 2.2. Problem 50 carries out the details of this procedure to complete the derivation of (13).
Example 4  Given that y₁(t) = t is a solution to

(17)  y″ − (1/t)y′ + (1/t²)y = 0 ,

use the reduction of order procedure to determine a second linearly independent solution for t > 0.

Solution  Rather than implementing the formula (13), let's apply the strategy used to derive it. We set y₂(t) = v(t)y₁(t) = v(t)t and substitute y₂′ = v′t + v, y₂″ = v″t + 2v′ into (17) to find

(18)  v″t + 2v′ − (1/t)(v′t + v) + (1/t²)vt = v″t + (2v′ − v′) + (−v/t + v/t) = v″t + v′ = 0 .

As promised, (18) is a separable first-order equation in v′, simplifying to (v′)′/(v′) = −1/t with a solution v′ = 1/t, or v = ln t (taking integration constants to be zero). Therefore, a second solution to (17) is y₂ = vt = t ln t.
Of course, (17) is a Cauchy–Euler equation for which (7) has equal roots:

  ar² + (b − a)r + c = r² − 2r + 1 = (r − 1)² = 0 ,

and y₂ is precisely the form for the independent solution predicted by (9).
Example 5  The following equation arises in the mathematical modeling of reverse osmosis.†

(19)  (sin t)y″ − 2(cos t)y′ − (sin t)y = 0 ,  0 < t < π .

Find a general solution.

Solution  As we indicated above, the tricky part is to find a single nontrivial solution. Inspection of (19) suggests that y = sin t or y = cos t, combined with a little luck with trigonometric identities, might be solutions. In fact, trial and error shows that the cosine function works:

  y₁ = cos t ,  y₁′ = −sin t ,  y₁″ = −cos t ,

  (sin t)y₁″ − 2(cos t)y₁′ − (sin t)y₁ = (sin t)(−cos t) − 2(cos t)(−sin t) − (sin t)(cos t) = 0 .

Unfortunately, the sine function fails (try it).
So we use reduction of order to construct a second, independent solution. Setting y₂(t) = v(t)y₁(t) = v(t) cos t and computing y₂′ = v′ cos t − v sin t, y₂″ = v″ cos t − 2v′ sin t − v cos t, we substitute into (19) to derive

  (sin t)[v″ cos t − 2v′ sin t − v cos t] − 2(cos t)[v′ cos t − v sin t] − (sin t)[v cos t]
    = v″(sin t)(cos t) − 2v′(sin²t + cos²t) = 0 ,

which is equivalent to the separated first-order equation

  (v′)′/(v′) = 2/[(sin t)(cos t)] = 2 sec²t/tan t .

†Reverse osmosis is a process used to fortify the alcoholic content of wine, among other applications.


Taking integration constants to be zero yields ln v′ = 2 ln(tan t), or v′ = tan²t = sec²t − 1, and v = tan t − t. Therefore, a second solution to (19) is y₂ = (tan t − t) cos t = sin t − t cos t. We conclude that a general solution is c₁ cos t + c₂(sin t − t cos t).
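As in Example 3, the result can be spot-checked by finite differences: the residual of (19), with the signs as printed above, at y₂ = sin t − t cos t should vanish up to discretization error:

```python
from math import sin, cos

def y2(t):
    # Second solution found in Example 5.
    return sin(t) - t * cos(t)

def residual(t, h=1e-5):
    # Central differences for y2' and y2'', plugged into
    # (sin t) y'' - 2(cos t) y' - (sin t) y.
    d1 = (y2(t + h) - y2(t - h)) / (2 * h)
    d2 = (y2(t + h) - 2 * y2(t) + y2(t - h)) / h**2
    return sin(t) * d2 - 2 * cos(t) * d1 - sin(t) * y2(t)

print(abs(residual(1.5)) < 1e-3)   # True
```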

In this section we have seen that the theory for variable-coefficient equations differs only slightly from the constant-coefficient case (in that solution domains are restricted to intervals), but explicit solutions can be hard to come by. In the next section, we will supplement our exposition by describing some nonrigorous procedures that can sometimes be used to predict qualitative features of the solutions.

4.7   EXERCISES

In Problems 1 through 4, use Theorem 5 to discuss the existence and uniqueness of a solution to the differential equation that satisfies the initial conditions y(1) = Y₀, y′(1) = Y₁, where Y₀ and Y₁ are real constants.
1. t(t − 3)y″ + 2ty′ − y = t²
2. (1 − t²)y″ + ty′ − y = tan t
3. t²y″ + y = cos t
4. e^t y″ + [1/(t − 3)]y′ + y = ln t

In Problems 5 through 8, determine whether Theorem 5 applies. If it does, then discuss what conclusions can be drawn. If it does not, explain why.
5. t²z″ + tz′ + z = cos t ;  z(0) = 1 ,  z′(0) = 0
6. y″ + yy′ = t² + 1 ;  y(0) = 1 ,  y′(0) = 1
7. y″ + ty′ + t²y = 0 ;  y(0) = 0 ,  y(1) = 0
8. (1 − t)y″ + ty′ − 2y = sin t ;  y(0) = 1 ,  y′(0) = 1

In Problems 9 through 14, find a general solution to the given Cauchy–Euler equation for t > 0.
9. t² (d²y/dt²) + 2t (dy/dt) − 6y = 0
10. t²y″(t) + 7ty′(t) + 7y(t) = 0
11. d²w/dt² + (6/t)(dw/dt) + (4/t²)w = 0
12. t² (d²z/dt²) + 5t (dz/dt) + 4z = 0
13. 9t²y″(t) + 15ty′(t) + y(t) = 0
14. t²y″(t) − 3ty′(t) + 4y(t) = 0

In Problems 15 through 18, find a general solution for t < 0.
15. y″(t) − (1/t)y′(t) + (5/t²)y(t) = 0
16. t²y″(t) + 3ty′(t) + 6y(t) = 0
17. t²y″(t) + 9ty′(t) + 17y(t) = 0
18. t²y″(t) + 3ty′(t) + 5y(t) = 0

In Problems 19 and 20, solve the given initial value problem for the Cauchy–Euler equation.
19. t²y″(t) − 4ty′(t) + 4y(t) = 0 ;  y(1) = 2 ,  y′(1) = 11
20. t²y″(t) + 7ty′(t) + 5y(t) = 0 ;  y(1) = −1 ,  y′(1) = 13

In Problems 21 and 22, devise a modification of the method for Cauchy–Euler equations to find a general solution to the given equation.
21. (t − 2)²y″(t) − 7(t − 2)y′(t) + 7y(t) = 0 ,  t > 2
22. (t + 1)²y″(t) + 10(t + 1)y′(t) + 14y(t) = 0 ,  t > −1
23. To justify the solution formulas (8) and (9), perform the following analysis.
(a) Show that if the substitution t = e^x is made in the function y(t) and x is regarded as the new independent variable in Y(x) := y(e^x), the chain rule implies the following relationships:

  t (dy/dt) = dY/dx ,  t² (d²y/dt²) = d²Y/dx² − dY/dx .
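These two identities can be sanity-checked numerically for a sample function, say y(t) = t³, so that Y(x) = y(e^x) = e^(3x); the derivatives below are approximated by central differences:

```python
from math import exp

def y(t):
    return t ** 3            # sample function y(t) = t^3

def Y(x):
    return y(exp(x))         # Y(x) := y(e^x)

def d(f, x, h=1e-6):
    # Central-difference first derivative.
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    # Central-difference second derivative.
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# Part (a) identities: t y'(t) = Y'(x) and t^2 y''(t) = Y''(x) - Y'(x).
x = 1.0
t = exp(x)
print(round(t * d(y, t), 3), round(d(Y, x), 3))   # both ≈ 60.257
```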
