
Western Kentucky University

TopSCHOLAR
Masters Theses & Specialist Projects Graduate School
5-1-2011
Deterministic and Stochastic Bellman's Optimality
Principles on Isolated Time Domains and Their
Applications in Finance
Nezihe Turhan
Western Kentucky University, nezihe.turhan935@topper.wku.edu
Follow this and additional works at: http://digitalcommons.wku.edu/theses
Part of the Dynamical Systems Commons, and the Finance and Financial Management
Commons
This thesis is brought to you for free and open access by TopSCHOLAR. It has been accepted for inclusion in Masters Theses & Specialist Projects by
an authorized administrator of TopSCHOLAR. For more information, please contact connie.foster@wku.edu.
Recommended Citation
Turhan, Nezihe, "Deterministic and Stochastic Bellman's Optimality Principles on Isolated Time Domains and Their Applications in
Finance" (2011). Masters Theses & Specialist Projects. Paper 1045.
http://digitalcommons.wku.edu/theses/1045

DETERMINISTIC AND STOCHASTIC BELLMAN'S OPTIMALITY
PRINCIPLES ON ISOLATED TIME DOMAINS
AND THEIR APPLICATIONS IN FINANCE

A Thesis
Presented to
The Faculty of the Department of Mathematics and Computer Science
Western Kentucky University
Bowling Green, Kentucky

In Partial Fulfillment
Of the Requirements for the Degree
Master of Science
By
Nezihe Turhan

May 2011
DETERMINISTIC AND STOCHASTIC BELLMAN'S OPTIMALITY
PRINCIPLES ON ISOLATED TIME DOMAINS
AND THEIR APPLICATIONS IN FINANCE
Date Recommended 04/20/2011
Dr. Ferhan Atici, Director of Thesis
ACKNOWLEDGEMENTS
It is my pleasure to thank those who made my thesis possible. First and
foremost I offer my deepest gratitude to my advisor, Dr. Ferhan Atici, whose
encouragement, guidance, and support from the initial to the final level enabled me
to develop an understanding of the subject. She has made available her support
in a number of ways. Also, I gratefully thank Dr. Melanie Autin and Dr. Alex
Lebedinsky for serving on my committee. I am fortunate that in the midst of all
their activities, they accepted to be members of the thesis committee and supported
my study.
My special thanks go to my beloved family for their understanding and endless
love through the duration of my studies at Western Kentucky University.
TABLE OF CONTENTS
1 Introduction
2 Preliminaries
2.1 Time Scale Calculus
2.2 Stochastic Calculus
3 Deterministic Dynamic Programming
3.1 Deterministic Dynamic Sequence Problem
3.2 Deterministic Bellman Equation
3.3 Bellman's Optimality Principle
3.4 An Example in Financial Economics: Optimal Growth Model
3.4.1 Solving the Bellman Equation by Guessing Method
3.5 Existence of Solution of the Bellman Equation
4 Stochastic Dynamic Programming
4.1 Stochastic Dynamic Sequence Problem
4.2 Stochastic Bellman Equation
4.3 Stochastic Optimal Growth Model
4.3.1 Solving the Stochastic Bellman Equation by Guessing Method
4.3.2 Distributions of the Optimal Solution
5 Conclusion and Future Work
Bibliography


DETERMINISTIC AND STOCHASTIC BELLMAN'S OPTIMALITY
PRINCIPLES ON ISOLATED TIME DOMAINS
AND THEIR APPLICATIONS IN FINANCE
Nezihe Turhan May 2011 72 Pages
Directed by: Dr. Ferhan Atici
Department of Mathematics and Computer Science, Western Kentucky University

The concept of dynamic programming was originally used in late 1949, mostly
during the 1950s, by Richard Bellman to describe decision making problems. By 1952,
he refined this to the modern meaning, referring specifically to nesting smaller decision
problems inside larger decisions. Also, the Bellman equation, one of the basic concepts in
dynamic programming, is named after him. Dynamic programming has become an
important tool used in various fields, such as economics, finance,
bioinformatics, aerospace, information theory, etc. Since Richard Bellman's invention of
dynamic programming, economists and mathematicians have formulated and solved a
huge variety of sequential decision making problems in both deterministic and stochastic
cases, on either finite or infinite time horizons. This thesis is comprised of five chapters
whose major objective is to study both deterministic and stochastic dynamic
programming models in finance.
In the first chapter, we give a brief history of dynamic programming and we
introduce the essentials of the theory. Unlike economists, who have analyzed dynamic
programming on discrete (that is, periodic) and continuous time domains, we claim that
trading is not a reasonably periodic or continuous act. Therefore, it is more accurate to
formulate dynamic programming on non-periodic time domains. In the second
chapter we introduce time scale calculus. Moreover, since it is more realistic to analyze


a decision maker's behavior without risk aversion, we give the basics of stochastic calculus
in this chapter. After we introduce the necessary background, in the third chapter we
construct the deterministic dynamic sequence problem on isolated time scales. Then we
derive the corresponding Bellman equation for the sequence problem. We analyze the
relation between solutions of the sequence problem and the Bellman equation through the
principle of optimality. We give an example of the deterministic model in finance, with all
details of the calculations, using the guessing method, and we prove uniqueness and existence
of the solution by using the Contraction Mapping Theorem. In the fourth chapter, we
define the stochastic dynamic sequence problem on isolated time scales. Then we derive
the corresponding stochastic Bellman equation. As in the deterministic case, we give an
example in finance, with the distributions of the solutions.
CHAPTER 1
INTRODUCTION
Dynamic optimization is a mathematical theory used to solve optimization
problems in a variety of fields, such as economics, finance, industry, information
theory, bioinformatics, aerospace, etc. The theory consists of many theories;
for instance, the theory of calculus of variations, optimal control theory, which is an
extension of calculus of variations, and lastly dynamic programming. Dynamic
programming (DP) is a recursive method for solving sequential (or multistage) decision
problems. The theory has enabled economists and mathematicians to construct
and solve a huge variety of sequential decision making problems in both deterministic
and stochastic cases; the former is decision making under certainty, and the
latter is under uncertainty.
The concept of dynamic programming has become an important principle in decision
making processes with the studies and contributions of a mathematician named
Richard Bellman. Bellman started working for the RAND Corporation in the summer
of 1949 and was encouraged to work on multistage decision processes by Ed
Paxson, who was an employee of the corporation and a friend of Richard Bellman.
The story that lies behind the choice of the name "dynamic programming" was
told by Richard Bellman in his autobiography and retold by Dreyfus in [8]. The
RAND Corporation was employed by the Air Force, and the person in charge was
a gentleman named Wilson. Wilson had a fear of the word research in its literal
meaning. Therefore, in order not to offend this gentleman during his visits to the
corporation, Bellman used the word programming instead of mathematical. Since the
process is multistage, i.e., time-varying, he decided on the word dynamic. Since
then, the theory has been called dynamic programming. Bellman published his
first paper [4] on the subject in 1952, and his first book [5] in 1957. He published
many papers and books on the theory and its applications in both deterministic
and stochastic cases. Also, the Bellman equation, which is a necessary condition
for an optimal solution, is named after him.
Since Richard Bellman's discovery of the theory, economists have been studying
both discrete, i.e., periodic, and continuous time. Stokey and Lucas did groundwork
in [18] that involves both cases, with deterministic and stochastic versions.
Romer [15] studied different economic models, such as the Solow Growth model,
the Ramsey-Cass-Koopmans model, and the Diamond model, on continuous time.
Conversely, Hassler analyzed the discrete time growth model and solved the problem
explicitly in [10]. However, we intend to draw attention to a different view
of the theory, which is dynamic programming on non-periodic time domains.
In this thesis we claim that trading is not a reasonably periodic or continuous
act. Consider a stock market example. A stockholder purchases stocks of a particular
company on varying time intervals. Assume that on the first day of the month,
a stockholder purchased stocks every hour, and in the following two days, the
company made a good profit, which made the stockholder purchase several times in
an hour. Unluckily, the company could not manage well for the rest of the month, so
the stockholder bought only one stock on the last day of the month. Since there
is not a strict rule on the behavior of the stockholder, we cannot tell if purchases
had been made on a regular and continuous basis. On the contrary, depending on
the management of the profit and the goodwill of the company, the stockholder
made purchases at non-periodic times. This argument leads our research to time scales
calculus, exclusively isolated time scales, and we formulate both deterministic and
stochastic growth models on arbitrary isolated time scales.
Dynamic programming can be studied on both infinite and finite horizons. First
we give a brief description of both cases for discrete time deterministic and stochastic
models.
The Finite Horizon Case Consider the finite horizon control problem on discrete time, for $t = 0, 1, 2, \ldots, T$ and $T < \infty$:

$$(SP)\qquad \max_{\{x(t),\,c(t)\}_{t=0}^{T}} \sum_{t=0}^{T} \beta^{t} F(x(t), c(t))$$
$$\text{s.t. } x(t+1) \in \Gamma(x(t), c(t)), \qquad x(0) \in X \text{ given},$$

where $x(t)$ is the state and $c(t)$ is the control variable, and $0 < \beta < 1$ is the discount factor. $F$ is the concave and bounded objective function, and $\Gamma(x(t), c(t))$ is the set of constraints that determines the feasible choices of $c(t)$ given $x(t)$. Deterministic dynamic programming is also called decision making under certainty. The consumer makes decisions at each stage with risk aversion; thus, there occurs a sequence of decisions $\{x(t), c(t)\}_{t=0}^{T}$. Our goal is to find the sequence $\{x(t), c(t)\}_{t=0}^{T}$ that gives the maximum discounted utility in $(SP)$. To do so, it is worthwhile to introduce the concept of the value function, $V(t, x(t))$, which is the solution of the corresponding Bellman equation, given by

$$V(t, x(t)) = F(x(t), c(t)) + \beta V(t+1, x(t+1))$$
$$\text{s.t. } x(t+1) \in \Gamma(x(t), c(t)).$$

A value function describes the maximum present discounted value of the objective function from a specific point in time as a function of the state variables at that date. Thus, we achieve our goal by obtaining the value function $V(t, x(t))$. Since there is a terminal condition in finite horizon problems, we can solve the Bellman equation either by guessing or by iteration of the value function.
The Infinite Horizon Case This time we let the time variable $t$ be infinite, i.e., $t = 0, 1, 2, \ldots$. Then the sequence problem has the form

$$(SP)\qquad \max_{\{x(t),\,c(t)\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^{t} F(x(t), c(t))$$
$$\text{s.t. } x(t+1) \in \Gamma(x(t), c(t)), \qquad x(0) \in X \text{ given}.$$

On the infinite horizon, there is no terminal condition. Instead of this, we have a transversality condition.

Since time is infinite, it is not practical to use iteration of the value function on the infinite horizon. Moreover, since

$$\max \lim_{T \to \infty} \sum_{s=0}^{T} \beta^{s} F(x(s), c(s)) \neq \lim_{T \to \infty} \max \sum_{s=0}^{T} \beta^{s} F(x(s), c(s)),$$

we cannot just find the solution on the infinite horizon by taking the limit of the finite horizon solution as $T \to \infty$. Therefore, we use Richard Bellman's Principle of Optimality to solve the infinite horizon sequence problem.

Bellman's Principle of Optimality An optimal policy has the property that, whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the initial decision.

By the guessing method, we identify the value function $V(t, x(t))$, and then analyze the link between the optimal solution of $(SP)$ and $V(t, x(t))$ via Bellman's Principle of Optimality.
We also formulate stochastic dynamic programming on the infinite horizon in Chapter 4 because, unlike the deterministic model, a stochastic model is more realistic. It reflects the behavior of the decision maker without risk aversion. In the stochastic model, the sequence problem above has the form

$$(SP)\qquad \max_{\{x(t),\,c(t)\}_{t=0}^{\infty}} \mathbb{E} \sum_{t=0}^{\infty} \beta^{t} F(x(t), c(t))$$
$$\text{s.t. } x(t+1) \in \Gamma(x(t), c(t), z(t)),$$

where $z(t)$ is the random state variable. That means future consumption is uncertain. Then the corresponding stochastic Bellman equation has the form

$$V(t, x(t)) = F(x(t), c(t)) + \beta\, \mathbb{E}[V(t+1, x(t+1))]$$
$$\text{s.t. } x(t+1) \in \Gamma(x(t), c(t), z(t)).$$

Eventually, the solution of the Bellman equation involves the random variable $z(t)$, which makes an explicit solution impossible. Therefore, we characterize the distributions of solutions. We refer to Karatzas [12] and Bergin [6] for a more detailed discussion.
Consequently, we underline the fact that dynamic programming is a better
alternative to calculus of variations and optimal control theory. Unlike optimal
control or calculus of variations, dynamic programming has been applied to problems
in both discrete and continuous time. Dynamic programming can also deal
with problems that are formulated on the infinite horizon, where the other two methods
remain inadequate to deal with the time inconsistency. Moreover, we have
more flexibility on constraints in dynamic programming. Another benefit of using
dynamic programming in decision making problems is that we can extend the
theory to the stochastic case, in which there is uncertainty. We refer to Kamien
and Schwartz [11] for further reading on calculus of variations and optimal control
in economics.
Recently, some papers have been published about dynamic programming on time
scales. Zhan, Wei, and Xu analyzed the Hamilton-Jacobi-Bellman equations on
time scales in [19]. Similarly, Seiffertt, Sanyal, and Wunsch studied the Hamilton-
Jacobi-Bellman equations and approximate dynamic programming on time scales
in [17]. Also, Atici, Biles, and Lebedinsky studied applications of time scales to
economics and analyzed a utility maximization problem on multiple time scales in
[2] and [3], respectively.
CHAPTER 2
PRELIMINARIES
As mentioned in the previous chapter, since trading is not a reasonably periodic
and continuous act, we formulate both deterministic and stochastic models on
isolated time domains to reflect the scattered way of trading over time. Because
time scale calculus is a mathematical theory that explains the structure of such
domains, it is beneficial to comprehend this theory. Also, we introduce stochastic
calculus briefly to understand the behavior of the solution under uncertainty.
2.1 Time Scale Calculus
In this section, for the reader's convenience, we shall mention some basic definitions of time scale calculus. A time scale $\mathbb{T}$ is a nonempty closed subset of the real numbers $\mathbb{R}$. Examples of such sets are the real numbers $\mathbb{R}$, the integers $\mathbb{Z}$, the natural numbers $\mathbb{N}$, the Cantor set, the harmonic numbers $\{H_{n}\}_{n=0}^{\infty}$ where $H_{0} = 0$ and $H_{n} = \sum_{k=1}^{n} \frac{1}{k}$, and $q^{\mathbb{N}_{0}} = \{q^{n} : n \in \mathbb{N}_{0}\}$ where $q > 1$. On the other hand, the rational numbers $\mathbb{Q}$, the irrational numbers $\mathbb{R}\setminus\mathbb{Q}$, the complex numbers $\mathbb{C}$, and the open interval $(0,1)$ are not time scales.

A time scale $\mathbb{T}$ may or may not be connected. Therefore we define the forward jump operator $\sigma(t)$ and the backward jump operator $\rho(t)$. For $t \in \mathbb{T}$ we define $\sigma, \rho : \mathbb{T} \to \mathbb{T}$ by

$$\sigma(t) = \inf\{s \in \mathbb{T} : s > t\} \quad\text{and}\quad \rho(t) = \sup\{s \in \mathbb{T} : s < t\}.$$

For the empty set $\emptyset$, the above definition includes $\inf \emptyset = \sup \mathbb{T}$ and $\sup \emptyset = \inf \mathbb{T}$. A point $t \in \mathbb{T}$ is called right-scattered if $\sigma(t) > t$, right-dense if $\sigma(t) = t$, left-scattered if $\rho(t) < t$, and left-dense if $\rho(t) = t$. If $\rho(t) < t < \sigma(t)$, then $t$ is called an isolated point, and if $\rho(t) = t = \sigma(t)$, then $t$ is called a dense point. If $\sup \mathbb{T} < \infty$ and $\sup \mathbb{T}$ is left-scattered, we let $\mathbb{T}^{\kappa} = \mathbb{T}\setminus\{\sup \mathbb{T}\}$; otherwise $\mathbb{T}^{\kappa} = \mathbb{T}$.

A time scale $\mathbb{T}$ is called an isolated time scale if every $t \in \mathbb{T}$ is an isolated point. For instance, the integers $\mathbb{Z}$, the natural numbers $\mathbb{N}$, and $q^{\mathbb{N}_{0}} = \{q^{n} : n \in \mathbb{N}_{0}\}$ where $q > 1$ are all isolated time scales. For our future purposes, we will assume that $\mathbb{T}$ is an isolated time scale with $\sup \mathbb{T} = \infty$.

The graininess function $\mu : \mathbb{T} \to [0, \infty)$ is defined by

$$\mu(t) = \sigma(t) - t.$$

Note that for any isolated time scale, the graininess function is strictly positive.
Definition 1 If $f : \mathbb{T} \to \mathbb{R}$ is a function and $t \in \mathbb{T}^{\kappa}$, we define the delta derivative of $f$ at the point $t$ to be the number $f^{\Delta}(t)$ (provided it exists) with the property that, for any $\varepsilon > 0$, there exists a neighborhood $U$ of $t$ such that

$$\left|\,[f(\sigma(t)) - f(s)] - f^{\Delta}(t)\,[\sigma(t) - s]\,\right| \leq \varepsilon\,|\sigma(t) - s|$$

for all $s \in U_{\mathbb{T}} = U \cap \mathbb{T}$.

Moreover, if the number $f^{\Delta}(t)$ exists for all $t \in \mathbb{T}^{\kappa}$, then we say that $f$ is delta differentiable on $\mathbb{T}^{\kappa}$. The delta derivative is given by

$$f^{\Delta}(t) = \frac{f(\sigma(t)) - f(t)}{\mu(t)} \qquad (2.1)$$

if $f$ is continuous at $t$ and $t$ is right-scattered. If $t$ is right-dense, then the derivative is given by

$$f^{\Delta}(t) = \lim_{s \to t} \frac{f(\sigma(t)) - f(s)}{t - s} = \lim_{s \to t} \frac{f(t) - f(s)}{t - s},$$

provided the limit exists. As an example, if $\mathbb{T} = \mathbb{R}$ we have $f^{\Delta}(t) = f'(t)$, and if $\mathbb{T} = \mathbb{Z}$ we have $f^{\Delta}(t) = \Delta f(t) = f(t+1) - f(t)$.


For the function $f : \mathbb{T} \to \mathbb{R}$ we define $f^{\sigma} : \mathbb{T}^{\kappa} \to \mathbb{R}$ by

$$f^{\sigma}(t) = f(\sigma(t)) \quad\text{for all } t \in \mathbb{T}^{\kappa}.$$

If $f$ is differentiable at $t \in \mathbb{T}^{\kappa}$, then

$$f(\sigma(t)) = f(t) + \mu(t)\, f^{\Delta}(t),$$

and for any differentiable function $g$, the product rule on the time scale $\mathbb{T}$ is given by

$$(fg)^{\Delta}(t) = f^{\Delta}(t)\, g(t) + f(\sigma(t))\, g^{\Delta}(t) = f(t)\, g^{\Delta}(t) + f^{\Delta}(t)\, g(\sigma(t)). \qquad (2.2)$$

Definition 2 A function $f : \mathbb{T} \to \mathbb{R}$ is called rd-continuous provided it is continuous at right-dense points in $\mathbb{T}$ and its left-sided limits exist at left-dense points in $\mathbb{T}$. The set of rd-continuous functions is denoted by $C_{rd}$.
Definition 3 A function $F : \mathbb{T} \to \mathbb{R}$ is called an antiderivative of $f : \mathbb{T} \to \mathbb{R}$ provided $F^{\Delta}(t) = f(t)$ for all $t \in \mathbb{T}^{\kappa}$. Then we define the Cauchy integral of $F^{\Delta}(t) = f(t)$ from $a$ to $t$ by

$$\int_{a}^{t} f(s)\,\Delta s = F(t) - F(a).$$

If $\mathbb{T}$ is an isolated time scale, then the Cauchy integral has the form

$$\int_{a}^{b} f(t)\,\Delta t = \sum_{t \in [a,b)_{\mathbb{T}}} f(t)\,\mu(t),$$

where $a, b \in \mathbb{T}$ with $a < b$. In the case where $\mathbb{T} = \mathbb{R}$, we have

$$\int_{a}^{b} f(t)\,\Delta t = \int_{a}^{b} f(t)\,dt,$$

and if $\mathbb{T} = \mathbb{Z}$, then

$$\int_{a}^{b} f(t)\,\Delta t = \sum_{t=a}^{b-1} f(t), \qquad a < b.$$

As an example, if $\mathbb{T} = \mathbb{Z}$ and $f(t) = r^{t}$ where $r \neq 1$ is constant, then we have

$$\int_{a}^{b} r^{t}\,\Delta t = \sum_{t=a}^{b-1} r^{t} = \left[\frac{r^{t}}{r-1}\right]_{a}^{b} = \frac{r^{b} - r^{a}}{r-1}.$$
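The same example can be verified numerically: on an isolated time scale the Cauchy integral is a finite sum of $f(t)\,\mu(t)$ over the points of $[a,b)_{\mathbb{T}}$. A small Python sketch (not part of the thesis; the values of $r$, $a$, $b$ are illustrative assumptions):

# Cauchy integral on an isolated time scale: sum of f(t) * mu(t) over [a, b).
def delta_integral(f, ts):
    return sum(f(ts[i]) * (ts[i + 1] - ts[i]) for i in range(len(ts) - 1))

r, a, b = 3.0, 0, 5
Z = list(range(a, b + 1))                 # the points of [0, 5] in T = Z
lhs = delta_integral(lambda t: r**t, Z)   # sum of r^t for t = 0, ..., 4
rhs = (r**b - r**a) / (r - 1)             # the closed form above
print(lhs, rhs)                           # both equal 121.0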
Theorem 4 If $f \in C_{rd}$ and $t \in \mathbb{T}^{\kappa}$, then

$$\int_{t}^{\sigma(t)} f(\tau)\,\Delta \tau = \mu(t)\, f(t).$$

If $f, g \in C_{rd}$ and $a, b \in \mathbb{T}$, then the integration by parts formula on time scales is given by

$$\int_{a}^{b} f(\sigma(t))\, g^{\Delta}(t)\,\Delta t = (fg)(b) - (fg)(a) - \int_{a}^{b} f^{\Delta}(t)\, g(t)\,\Delta t.$$
Now, in order to denote the generalized exponential function $e_{p}(t,s)$, we need to give some definitions.

First, we say that a function $p : \mathbb{T} \to \mathbb{R}$ is regressive if

$$1 + \mu(t)\, p(t) \neq 0 \quad\text{for all } t \in \mathbb{T}^{\kappa}.$$

The set of all regressive functions on a time scale $\mathbb{T}$ forms an Abelian group under the circle plus addition $\oplus$ defined by

$$(p \oplus q)(t) = p(t) + q(t) + \mu(t)\, p(t)\, q(t) \quad\text{for all } t \in \mathbb{T}^{\kappa},$$

and the inverse $\ominus p$ of the function $p$ is defined by

$$(\ominus p)(t) = \frac{-p(t)}{1 + \mu(t)\, p(t)} \quad\text{for all } t \in \mathbb{T}^{\kappa}.$$

The set $\mathcal{R}$ is given by

$$\mathcal{R} = \{\, p : \mathbb{T} \to \mathbb{R} : p \text{ is rd-continuous and regressive} \,\}.$$

Now, for $p \in \mathcal{R}$ we define the exponential function $e_{p}(\cdot, t_{0})$ to be the unique solution of the initial value problem

$$y^{\Delta} = p(t)\, y, \qquad y(t_{0}) = 1.$$

Next we list the properties of the exponential function $e_{p}(\cdot, t_{0})$ that are very useful for our goals.

Lemma 5 For $p \in \mathcal{R}$ we have

i) $e_{0}(t,s) = 1$;
ii) $e_{p}(t,t) = 1$;
iii) $e_{p}(t,s) = \dfrac{1}{e_{p}(s,t)} = e_{\ominus p}(s,t)$;
iv) $e_{p}(t,s)\, e_{p}(s,r) = e_{p}(t,r)$;
v) $e_{p}(t,s)\, e_{q}(t,s) = e_{p \oplus q}(t,s)$;

where $r, s, t \in \mathbb{T}$.

We define the set of positive regressive functions by

$$\mathcal{R}^{+} = \{\, p \in \mathcal{R} : 1 + \mu(t)\, p(t) > 0 \text{ for all } t \in \mathbb{T} \,\}.$$

If $p$ is positive regressive, then the exponential function $e_{p}(t, t_{0})$ is positive for all $t \in \mathbb{T}$ [Theorem 2.48 (i) in [7]].
Here are some examples of $e_{p}(t, t_{0})$ on various time scales. If $\mathbb{T} = \mathbb{R}$, then $e_{\alpha}(t, t_{0}) = e^{\alpha(t - t_{0})}$; if $\mathbb{T} = h\mathbb{Z}$, then $e_{\alpha}(t, t_{0}) = (1 + \alpha h)^{\frac{t - t_{0}}{h}}$. This implies that $e_{\alpha}(t, t_{0}) = (1 + \alpha)^{t - t_{0}}$ is the exponential function for $\mathbb{T} = \mathbb{Z}$.
Now consider the initial value problem

$$y^{\sigma}(t) - p(t)\, y(t) = r(t), \qquad y(t_{0}) = y_{0} \qquad (2.3)$$

on isolated time scales, where $p(t) \neq 0$ for all $t \in \mathbb{T}^{\kappa}$. Then the unique solution of the initial value problem is given by

$$y(t) = e_{\frac{p-1}{\mu}}(t, t_{0})\, y_{0} + \int_{t_{0}}^{t} e_{\frac{p-1}{\mu}}(t, \sigma(s))\, \frac{r(s)}{\mu(s)}\,\Delta s.$$

This problem was extensively studied in the paper by A. C. Peterson and his students [14], and it brings us significant outcomes that are beneficial for our work.
Remark 6 The exponential function $e_{\frac{p-1}{\mu}}(t, t_{0})$ is given by

$$e_{\frac{p-1}{\mu}}(t, t_{0}) = \prod_{\tau \in [t_{0}, t)} p(\tau)$$

for $t \geq t_{0}$; otherwise we have

$$e_{\frac{p-1}{\mu}}(t, t_{0}) = \prod_{\tau \in [t, t_{0})} \frac{1}{p(\tau)}.$$

This remark yields two important conclusions. First, $e_{\frac{p-1}{\mu}}(t, t_{0})$ is the unique solution of the homogeneous equation $y^{\sigma}(t) - p(t)\, y(t) = 0$ with the initial value $y(t_{0}) = 1$. Thus we have the recurrence relation

$$e^{\sigma}_{\frac{p-1}{\mu}}(t, t_{0}) = p(t)\, e_{\frac{p-1}{\mu}}(t, t_{0}) \qquad (2.4)$$

for $t \geq t_{0}$. Second, let $0 < p < 1$ be a constant number, and for $t \geq t_{0}$, let $t = t_{n}$ on the time scale $\mathbb{T} = \{t_{0}, t_{1}, \ldots, t_{n}, \ldots\}$. Also, let $g(t)$ be the function of $t$ that counts the number of isolated points in the interval $[t_{0}, t) \cap \mathbb{T}$. Then the exponential function becomes

$$e_{\frac{p-1}{\mu}}(t, t_{0}) = \prod_{\tau \in [t_{0}, t)} p = p^{g(t)}.$$

Thus, we have

$$\lim_{t \to \infty} e_{\frac{p-1}{\mu}}(t, t_{0}) = 0. \qquad (2.5)$$

For further reading and for the proofs, we refer the reader to the book by A. C. Peterson and M. Bohner [7].
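For the computations later in the thesis, the product representation of Remark 6 is the practical route. The Python sketch below (not part of the thesis; the time scale and the constant $p = 0.9$ are illustrative assumptions) evaluates $e_{\frac{p-1}{\mu}}(t, t_{0})$ as the product $\prod_{\tau \in [t_{0}, t)} p(\tau)$ and checks the recurrence (2.4) and the decay (2.5):

import math

def exp_ts(p, ts, n):
    # e_{(p-1)/mu}(ts[n], ts[0]) = product of p(tau) over tau in [t0, ts[n])
    out = 1.0
    for i in range(n):
        out *= p(ts[i])
    return out

ts = [2.0**k for k in range(40)]        # T = 2^{N_0}, an isolated time scale
p = lambda t: 0.9                       # constant 0 < p < 1
vals = [exp_ts(p, ts, n) for n in range(40)]
print(vals[:4])                         # 1.0, 0.9, 0.81, 0.729 = p**g(t)
print(math.isclose(vals[5], p(ts[4]) * vals[4]))   # recurrence (2.4)
print(vals[-1])                         # already close to 0, as in (2.5)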
2.2 Stochastic Calculus
Since we formulate our models on isolated time domains, in this section we define the basics of a discrete probability model for our purposes. We start by defining a sample set $\Omega$. Suppose we have a single stock with price $S_{t}$ at time $t = 1, 2, \ldots, T$. Then the set of all possible values, $\Omega$, is given by

$$\Omega = \{\, \omega : \omega = (S_{1}, S_{2}, \ldots, S_{T}) \,\}.$$

We have to list all possible future prices, i.e., all possible states of the world, to model uncertainty about the price in the future. The unknown future is just one of many possible outcomes, called the true state of the world.

Now say $\mathcal{F}_{t}$ is the information available to investors at time $t$, which consists of stock prices before and at time $t$. Then, for a finite sample space $\Omega$, we define the random variable $X$ as a real-valued function given by

$$X : (\Omega, \mathcal{F}_{t}) \to \mathbb{R}.$$

Definition 7 A stochastic process is a collection of random variables $\{X(t)\}$.

Next we introduce the expectation, variance, and covariance of random variables. Prior to this, we shall give some basic definitions. Define the probability function of an outcome $\omega \in \Omega$, where $\Omega$ is finite, as $P(\omega)$. $P(\omega)$ is a real number which describes the likelihood of $\omega$ occurring. It satisfies the following properties:

i) $P(\omega) \geq 0$;
ii) $\sum_{\omega \in \Omega} P(\omega) = P(\Omega) = 1$.

Notice that, for a random variable $X : \Omega \to \mathbb{R}$, where $\Omega$ is finite, $X(\Omega)$ also denotes a finite set given by

$$X(\Omega) = \{x_{i}\}_{i=1}^{k}.$$

We define the collection of probabilities $p_{i}$ of the sets

$$\{X = x_{i}\} = \{\, \omega \in \Omega : X(\omega) = x_{i} \,\}$$

as the probability distribution of $X$, provided

$$p_{i} = P(X = x_{i}) = \sum_{\omega :\, X(\omega) = x_{i}} P(\omega).$$

Definition 8 For the random variable $X$ given on $(\Omega, \mathcal{F})$ and probability $P$, the expectation of $X$ with respect to $P$ is given by

$$\mathbb{E}(X) = \mathbb{E}_{P} X = \sum_{\text{all } \omega} X(\omega)\, P(\omega).$$

Next, we give properties of the expectation, $\mathbb{E}(X)$, for any random variable $X$.

Remark 9 Assume $X$ and $Y$ are random variables, and $\alpha$, $\beta$ are constants. Then we say $\mathbb{E}$ is linear, provided

$$\mathbb{E}(\alpha X + \beta Y) = \alpha\, \mathbb{E}(X) + \beta\, \mathbb{E}(Y)$$

or

$$\mathbb{E}(\alpha X + \beta) = \alpha\, \mathbb{E}(X) + \beta.$$

The covariance of two random variables $X$ and $Y$ is defined by

$$\mathrm{Cov}(X, Y) = \mathbb{E}[(X - \mathbb{E}(X))(Y - \mathbb{E}(Y))] = \mathbb{E}(XY) - \mathbb{E}(X)\,\mathbb{E}(Y).$$

If two random variables $X$ and $Y$ are uncorrelated, then

$$\mathrm{Cov}(X, Y) = 0.$$

Definition 10 Variance is a measure that describes how far a set of numbers is spread out from each other. In a probability distribution, it describes how far the numbers lie from the expectation (or mean).

The variance of $X$ is the covariance of $X$ with itself, i.e.,

$$\mathrm{Var}(X) = \mathrm{Cov}(X, X).$$

Then, we say

$$\mathrm{Var}(X) = \mathrm{Cov}(X, X) = \mathbb{E}[(X - \mathbb{E}X)(X - \mathbb{E}X)] = \mathbb{E}\left[(X - \mathbb{E}X)^{2}\right] = \mathbb{E}\left[X^{2}\right] - (\mathbb{E}[X])^{2}.$$
For further reading, we refer the reader to the book by F. C. Klebaner [13].
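As an illustration (not part of the thesis; the sample space and probabilities below are invented for the example), the definitions above can be computed directly in Python for a finite sample space:

# Expectation, covariance, and variance on a finite sample space.
omega = [0, 1, 2, 3]                      # outcomes
P = [0.1, 0.2, 0.3, 0.4]                  # P(omega) >= 0 and sums to 1

def E(X):
    return sum(X(w) * p for w, p in zip(omega, P))

def cov(X, Y):
    return E(lambda w: X(w) * Y(w)) - E(X) * E(Y)

X = lambda w: w                           # a random variable on omega
print(E(X))                               # expectation, 2.0 here
print(cov(X, X))                          # Var(X) = E[X^2] - (E[X])^2 = 1.0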
CHAPTER 3
DETERMINISTIC DYNAMIC PROGRAMMING
In this chapter, we study the concept of dynamic programming when the decision
maker makes decisions with risk aversion, i.e., decision making under certainty.
For our purposes, we assume $\mathbb{T}$ is an isolated time scale with $\sup \mathbb{T} = \infty$. In Section
3.1, we construct the infinite deterministic dynamic sequence problem on $\mathbb{T}$
and introduce the concept of the utility function, provided with examples. We continue
our work by deriving the corresponding Bellman equation in Section 3.2. In Section
3.3, the relation between solutions of the sequence problem and the Bellman
equation is identified through Bellman's Optimality Principle. Afterwards, in
Section 3.4, we solve an example of the optimal growth model with a given logarithmic
utility function in financial economics, and in Section 3.5, we examine the existence
and uniqueness of the solution of the Bellman equation via the Contraction Mapping
Theorem.
3.1 Deterministic Dynamic Sequence Problem
Let $\sup \mathbb{T} = \infty$ and $\mathbb{T}\cap[0,\infty) = [0,\infty)_{\mathbb{T}}$. Then we define the deterministic sequence problem on an isolated time scale $\mathbb{T}$ as

$$(SP)\qquad \sup_{\{x(t),\,c(t)\}_{t=0}^{\infty}} \int_{s\in[0,\infty)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,0)\, F(x(s),c(s))\,\Delta s \qquad (3.1)$$
$$\text{s.t. } x^{\Delta}(t) \in \Gamma(t, x(t), c(t))$$
$$x(0) \in X \text{ given},$$

where

$x(t)$ is the state variable,

$c(t)$ is the optimal control or choice variable,

$F(\cdot,\cdot)$ is a strictly concave, continuous, and differentiable real-valued objective function,

$X$ is the space of sequences $\{x(t)\}_{t=0}^{\infty}$ that maximizes the sequence problem,

$\Gamma : X \to X$ is the correspondence describing the feasibility constraints, where $\Gamma(t, x(t), c(t))$ is the set of constraints that determines the feasible choices of $c(t)$ given $x(t)$.

Also, we shall talk about the budget constraint. A budget constraint is an accounting identity that describes the consumption options available to a decision maker with a limited income (or wealth) to allocate among various goods. It is important to understand that the budget constraint is an accounting identity, not a behavioral relationship. A decision maker may well determine the next decision step in order to find the best path by considering the budget constraint. Thus, the budget constraint is a given element of the problem that a decision maker faces. In this thesis, we define the budget constraint on any isolated time scale $\mathbb{T}$ as

$$x^{\Delta}(t) \in \Gamma(t, x(t), c(t)),$$

which is also known as a first order dynamic inclusion. That means $x^{\Delta}(t)$ can be chosen as any of the feasible plans that belong to $\Gamma(t, x(t), c(t))$. For instance, we can choose the budget constraint as one of the following:

i) $x^{\Delta}(t) < g(t, x(t), c(t))$, or

ii) $x^{\Delta}(t) = g(t, x(t), c(t))$, or

iii) $x^{\Delta}(t) * g(t, x(t), c(t))$, etc., where $*$ defines the "relation" between the functions.
We refer to the paper by F. M. Atici and D. C. Biles [1] for a more detailed
discussion of existence results for first order dynamic inclusions on time
scales.
The sequence problem given in (3.1) is a multi-stage problem, and as we
mentioned in the first chapter, it is efficient to use dynamic programming to solve
it. The basic idea in dynamic programming is to take the infinite time
maximizing problem and somehow break it down into a reasonable number of
subproblems. Then we use the optimal solutions of the smaller subproblems to
find the optimal solutions of the larger ones. These subproblems can overlap, so
long as there are not too many of them.
3.2 Deterministic Bellman Equation
We define the deterministic Bellman equation on any isolated time scale $\mathbb{T}$ as

$$\left[ e_{\frac{\beta-1}{\mu}}(t,0)\, V(t, x(t)) \right]^{\Delta} + e_{\frac{\beta-1}{\mu}}(t,0)\, F(x(t), c(t)) = 0, \qquad (3.2)$$

where $V(t, x(t))$ is the value function.

Remark 11 Equation (3.2) is equivalent to

$$V(t, x(t)) = \mu(t)\, F(x(t), c(t)) + \beta\, V(\sigma(t), x(\sigma(t))) \qquad (3.3)$$
$$\text{s.t. } x^{\Delta}(t) \in \Gamma(t, x(t), c(t)).$$
Indeed, if we apply the product rule (2.2) to the LHS of equation (3.2), then we have

$$0 = \left[ e_{\frac{\beta-1}{\mu}}(t,0)\, V(t,x(t)) \right]^{\Delta} + e_{\frac{\beta-1}{\mu}}(t,0)\, F(x(t),c(t))$$

$$0 = e^{\Delta}_{\frac{\beta-1}{\mu}}(t,0)\, V(t,x(t)) + e^{\sigma}_{\frac{\beta-1}{\mu}}(t,0)\, V^{\Delta}(t,x(t)) + e_{\frac{\beta-1}{\mu}}(t,0)\, F(x(t),c(t)).$$

Then, by applying the definition of the delta derivative (2.1), we have

$$0 = \frac{\beta-1}{\mu(t)}\, e_{\frac{\beta-1}{\mu}}(t,0)\, V(t,x(t)) + \beta\, e_{\frac{\beta-1}{\mu}}(t,0)\, \frac{V^{\sigma}(t,x(t)) - V(t,x(t))}{\mu(t)} + e_{\frac{\beta-1}{\mu}}(t,0)\, F(x(t),c(t))$$

$$0 = -\frac{1}{\mu(t)}\, e_{\frac{\beta-1}{\mu}}(t,0)\, V(t,x(t)) + \frac{\beta}{\mu(t)}\, e_{\frac{\beta-1}{\mu}}(t,0)\, V^{\sigma}(t,x(t)) + e_{\frac{\beta-1}{\mu}}(t,0)\, F(x(t),c(t))$$

$$0 = e_{\frac{\beta-1}{\mu}}(t,0)\left[ -V(t,x(t)) + \beta\, V^{\sigma}(t,x(t)) + \mu(t)\, F(x(t),c(t)) \right].$$

Since $e_{\frac{\beta-1}{\mu}}(t,0) \neq 0$, the last equation above implies

$$0 = -V(t,x(t)) + \beta\, V^{\sigma}(t,x(t)) + \mu(t)\, F(x(t),c(t)).$$

If we rearrange it, we have

$$V(t,x(t)) = \mu(t)\, F(x(t),c(t)) + \beta\, V^{\sigma}(t,x(t)).$$
3.3 Bellman's Optimality Principle
In this section, we prove the main results of this thesis, which examine the
relation between the optimum of the sequence problem and the solution of the
Bellman equation. Once we obtain the link between these solutions, we can say
a solution of the Bellman equation is also a solution of the sequence problem.
Thus, instead of solving the sequence problem, we solve the Bellman equation.
The following theorem relates the solutions of the Bellman equation to those
of $(SP)$.
Theorem 12 Assume $(SP)$ attains its supremum at $\{x(t), c(t)\}_{t=0}^{\infty}$. Then

$$V(t, x(t)) = \int_{s\in[t,\infty)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,t)\, F(x(s), c(s))\,\Delta s \qquad (3.4)$$
$$\text{s.t. } x^{\Delta}(s) \in \Gamma(s, x(s), c(s))$$

is the solution of the Bellman equation with initial condition at $t = 0$.

Proof. It is sufficient to show that the value function (3.4) satisfies the Bellman equation (3.3). By plugging the value function $V(t, x(t))$ into the RHS of equation (3.3), we have

$$\mu(t)\, F(x(t),c(t)) + \beta\, V(\sigma(t), x(\sigma(t))) = \mu(t)\, F(x(t),c(t)) + \beta \int_{s\in[\sigma(t),\infty)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,\sigma(t))\, F(x(s),c(s))\,\Delta s.$$

By the properties of the exponential function, we have

$$= \mu(t)\, F(x(t),c(t)) + \beta \int_{s\in[\sigma(t),\infty)_{\mathbb{T}}} \frac{e_{\frac{\beta-1}{\mu}}(s,0)}{e_{\frac{\beta-1}{\mu}}(\sigma(t),0)}\, F(x(s),c(s))\,\Delta s.$$

Then, by the recurrence relation (2.4) and the property of the exponential function $e_{p}(t,s)\, e_{p}(s,r) = e_{p}(t,r)$, we have

$$= \mu(t)\, F(x(t),c(t)) + \frac{\beta}{\beta} \int_{s\in[\sigma(t),\infty)_{\mathbb{T}}} \frac{e_{\frac{\beta-1}{\mu}}(s,0)}{e_{\frac{\beta-1}{\mu}}(t,0)}\, F(x(s),c(s))\,\Delta s = \mu(t)\, F(x(t),c(t)) + \int_{s\in[\sigma(t),\infty)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,t)\, F(x(s),c(s))\,\Delta s.$$

Next, in the last equation above, we add and subtract the integral

$$\int_{t}^{\sigma(t)} e_{\frac{\beta-1}{\mu}}(s,t)\, F(x(s),c(s))\,\Delta s,$$

which gives us

$$= \mu(t)\, F(x(t),c(t)) - \int_{t}^{\sigma(t)} e_{\frac{\beta-1}{\mu}}(s,t)\, F(x(s),c(s))\,\Delta s + \int_{s\in[t,\infty)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,t)\, F(x(s),c(s))\,\Delta s.$$

Since we are on an isolated time scale $\mathbb{T}$, by Theorem 4 and since $e_{\frac{\beta-1}{\mu}}(t,t) = 1$, we have

$$\int_{t}^{\sigma(t)} e_{\frac{\beta-1}{\mu}}(s,t)\, F(x(s),c(s))\,\Delta s = \mu(t)\, e_{\frac{\beta-1}{\mu}}(t,t)\, F(x(t),c(t)) = \mu(t)\, F(x(t),c(t)).$$

Then this implies

$$= \mu(t)\, F(x(t),c(t)) - \mu(t)\, F(x(t),c(t)) + \int_{s\in[t,\infty)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,t)\, F(x(s),c(s))\,\Delta s = V(t,x(t)).$$
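Theorem 12 can also be checked numerically on a truncated isolated time scale: the truncated integral computed as a finite sum satisfies the recursion (3.3) exactly. A Python sketch (not part of the thesis; the time scale and the stand-in payoff values are illustrative assumptions):

import math

beta = 0.9
ts = [1.5**k for k in range(60)]                  # truncated isolated time scale
mu = [ts[i + 1] - ts[i] for i in range(len(ts) - 1)]
F = [1.0 / (1.0 + t) for t in ts]                 # stand-in for F(x(s), c(s))

def V(n):
    # e_{(beta-1)/mu}(s, ts[n]) = beta**(number of points in [ts[n], s))
    return sum(beta**(m - n) * F[m] * mu[m] for m in range(n, len(mu)))

n = 3
print(math.isclose(V(n), mu[n] * F[n] + beta * V(n + 1)))   # (3.3) holds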
Before we continue, we shall introduce the Euler equations and the transversality condition.

Euler Equations An Euler equation is an intertemporal version of a first-order condition characterizing an optimal choice. In many economic problems, whatever method one uses, such as calculus of variations, optimal control theory, or dynamic programming, part of the solution is typically a solution of the Euler equation. We define the Euler equations for our problem as follows.

Let $V(t, x(t))$ be the solution of the Bellman equation, and assume $(SP)$ attains its supremum at $\{x^{*}(t), c^{*}(t)\}_{t=0}^{\infty}$. Then define the supremum of all solutions of the Bellman equation as

$$V^{*}(t, x(t)) = \sup_{\{x(s)\}_{s=t}^{\infty}} V(s, x(s)) = \sup_{\{x(s)\}_{s=t}^{\infty}} \left[ \mu(s)\, F(x(s), c(s)) + \beta\, V(\sigma(s), x(\sigma(s))) \right].$$

Assume $x(\sigma(t)) = g(t, x(t), c(t))$, suitable to the budget constraint that is given with the Bellman equation. Then, by differentiating both sides of the above equation with respect to the control and state variables, respectively $c(t)$ and $x(t)$, we obtain the Euler equations of the Bellman equation as

$$0 = \beta\, V_{x}(\sigma(s), g(s, x^{*}(s), c^{*}(s)))\, g_{c}(s, x^{*}(s), c^{*}(s)) + \mu(s)\, F_{c}(x^{*}(s), c^{*}(s)), \qquad (3.5)$$

and

$$V_{x}(t, x^{*}(t)) = \beta\, V_{x}(\sigma(s), g(s, x^{*}(s), c^{*}(s)))\, g_{x}(s, x^{*}(s), c^{*}(s)) + \mu(s)\, F_{x}(x^{*}(s), c^{*}(s)). \qquad (3.6)$$

The Transversality Condition In order to solve the Euler equations for the optimal solution of the problem, we need boundary conditions. These boundary conditions are obtained from the transversality condition. The transversality condition states that

$$\lim_{T\to\infty} V_{x}(T, x^{*}(T)) = 0 \qquad (3.7)$$

if

$$V_{x}(T, x^{*}(T)) = \int_{s\in[0,\infty)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,0)\, F_{x}(x^{*}(s), c^{*}(s))\,\Delta s.$$

Under the transversality condition given with (3.7), the following theorem shows that any sequence satisfying this condition is an optimal solution of the maximizing sequence problem.
Theorem 13 Assume $V(t, x(t))$ is the solution of the Bellman equation. Let the supremum of all solutions of the Bellman equation be

$$V^{*}(t, x(t)) = \sup_{\{x(s)\}_{s=t}^{\infty}} V(s, x(s)). \qquad (3.8)$$

Then $V^{*}(t, x(t))$ characterizes the sequence problem $(SP)$ at the initial state $x(0)$.

Proof. Assume $\{x^{*}(t), c^{*}(t)\}_{t=0}^{\infty}$ attains the supremum in $(SP)$. It is sufficient to show that the difference between the objective functions in $(SP)$ evaluated at $\{x^{*}(t), c^{*}(t)\}$ and at $\{x(t), c(t)\}$ is nonnegative. Therefore, we start by setting the difference as

$$(SP)^{*} - (SP) = \lim_{T\to\infty} \int_{s\in[0,T)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,0)\left[ F(x^{*}(s),c^{*}(s)) - F(x(s),c(s)) \right]\Delta s.$$

Since $F$ is concave, continuous, and differentiable, we have

$$(SP)^{*} - (SP) \geq \lim_{T\to\infty} \int_{s\in[0,T)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,0)\left[ F_{x}(x^{*}(s),c^{*}(s))\,(x^{*}(s)-x(s)) + F_{c}(x^{*}(s),c^{*}(s))\,(c^{*}(s)-c(s)) \right]\Delta s. \qquad (3.9)$$

If we solve the Euler equations given with (3.5) and (3.6), respectively, for $F_{c}$ and $F_{x}$, we obtain

$$F_{c}(x^{*}(s),c^{*}(s)) = -\frac{\beta}{\mu(s)}\, V_{x}(\sigma(s), g(s,x^{*}(s),c^{*}(s)))\, g_{c}(s,x^{*}(s),c^{*}(s)),$$

and

$$F_{x}(x^{*}(s),c^{*}(s)) = -\frac{\beta}{\mu(s)}\, V_{x}(\sigma(s), g(s,x^{*}(s),c^{*}(s)))\, g_{x}(s,x^{*}(s),c^{*}(s)) + \frac{1}{\mu(s)}\, V_{x}(s, x^{*}(s)).$$

Now we substitute $F_{c}$ and $F_{x}$ in (3.9). Then we have

$$= \lim_{T\to\infty} \int_{s\in[0,T)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,0)\Big\{ \frac{1}{\mu(s)}\, V_{x}(s,x^{*}(s))\,(x^{*}(s)-x(s)) - \frac{\beta}{\mu(s)}\, V_{x}(\sigma(s), g(s,x^{*}(s),c^{*}(s)))\left[ g_{x}(s,x^{*}(s),c^{*}(s))\,(x^{*}(s)-x(s)) + g_{c}(s,x^{*}(s),c^{*}(s))\,(c^{*}(s)-c(s)) \right] \Big\}\Delta s$$

$$\geq \lim_{T\to\infty} \int_{s\in[0,T)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,0)\Big\{ \frac{1}{\mu(s)}\, V_{x}(s,x^{*}(s))\,(x^{*}(s)-x(s)) - \frac{\beta}{\mu(s)}\, V_{x}(\sigma(s), g(s,x^{*}(s),c^{*}(s)))\left[ g(s,x^{*}(s),c^{*}(s)) - g(s,x(s),c(s)) \right] \Big\}\Delta s.$$

Since $x(\sigma(t)) = g(t,x(t),c(t))$, we have

$$= \lim_{T\to\infty} \int_{s\in[0,T)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,0)\Big\{ \frac{1}{\mu(s)}\, V_{x}(s,x^{*}(s))\,(x^{*}(s)-x(s)) - \frac{\beta}{\mu(s)}\, V_{x}(\sigma(s), x^{*}(\sigma(s)))\left[ x^{*}(\sigma(s)) - x(\sigma(s)) \right] \Big\}\Delta s.$$

By plugging

$$x(\sigma(s)) = x(s) + \mu(s)\, x^{\Delta}(s)$$

into the last expression above, we have

$$J = \lim_{T\to\infty} \int_{s\in[0,T)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,0)\Big\{ \frac{1}{\mu(s)}\, V_{x}(s,x^{*}(s))\,(x^{*}(s)-x(s)) - \frac{\beta}{\mu(s)}\, V_{x}(\sigma(s), x^{*}(\sigma(s)))\left[ (x^{*}(s)-x(s)) + \mu(s)\,(x^{*\Delta}(s) - x^{\Delta}(s)) \right] \Big\}\Delta s$$

$$= \lim_{T\to\infty} \int_{s\in[0,T)_{\mathbb{T}}} \Big\{ -\beta\, e_{\frac{\beta-1}{\mu}}(s,0)\, V_{x}(\sigma(s), x^{*}(\sigma(s)))\,(x^{*\Delta}(s)-x^{\Delta}(s)) + e_{\frac{\beta-1}{\mu}}(s,0)\,\frac{1}{\mu(s)}\left[ V_{x}(s,x^{*}(s)) - \beta\, V_{x}(\sigma(s), x^{*}(\sigma(s))) \right](x^{*}(s)-x(s)) \Big\}\Delta s.$$

By using the recurrence relation of the exponential function, i.e., $e^{\sigma}_{\frac{\beta-1}{\mu}}(t,0) = \beta\, e_{\frac{\beta-1}{\mu}}(t,0)$, we rewrite the above equation. Then we have

$$J = \lim_{T\to\infty} \int_{s\in[0,T)_{\mathbb{T}}} \Big\{ -e_{\frac{\beta-1}{\mu}}(\sigma(s),0)\, V_{x}(\sigma(s), x^{*}(\sigma(s)))\,(x^{*\Delta}(s)-x^{\Delta}(s)) - \frac{e^{\sigma}_{\frac{\beta-1}{\mu}}(s,0)\, V^{\sigma}_{x}(s,x^{*}(s)) - e_{\frac{\beta-1}{\mu}}(s,0)\, V_{x}(s,x^{*}(s))}{\mu(s)}\,(x^{*}(s)-x(s)) \Big\}\Delta s$$

$$= -\lim_{T\to\infty} \int_{s\in[0,T)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(\sigma(s),0)\, V_{x}(\sigma(s), x^{*}(\sigma(s)))\,(x^{*\Delta}(s)-x^{\Delta}(s))\,\Delta s - \lim_{T\to\infty} \int_{s\in[0,T)_{\mathbb{T}}} \left[ e_{\frac{\beta-1}{\mu}}(s,0)\, V_{x}(s,x^{*}(s)) \right]^{\Delta}(x^{*}(s)-x(s))\,\Delta s.$$

In the first integral above, we use the integration by parts formula on time scales, i.e.,

$$\int f(\sigma(t))\, g^{\Delta}(t)\,\Delta t = (fg)(t)\Big| - \int f^{\Delta}(t)\, g(t)\,\Delta t.$$

Now let

$$f(\sigma(s)) = e_{\frac{\beta-1}{\mu}}(\sigma(s),0)\, V_{x}(\sigma(s), x^{*}(\sigma(s))) \quad\text{and}\quad g^{\Delta}(s) = x^{*\Delta}(s) - x^{\Delta}(s),$$

and apply the integration by parts formula. Then we have

$$J = -\lim_{T\to\infty} \left[ e_{\frac{\beta-1}{\mu}}(t,0)\, V_{x}(t,x^{*}(t))\,(x^{*}(t)-x(t)) \right]_{0}^{T} + \lim_{T\to\infty} \int_{s\in[0,T)_{\mathbb{T}}} \left[ e_{\frac{\beta-1}{\mu}}(s,0)\, V_{x}(s,x^{*}(s)) \right]^{\Delta}(x^{*}(s)-x(s))\,\Delta s - \lim_{T\to\infty} \int_{s\in[0,T)_{\mathbb{T}}} \left[ e_{\frac{\beta-1}{\mu}}(s,0)\, V_{x}(s,x^{*}(s)) \right]^{\Delta}(x^{*}(s)-x(s))\,\Delta s$$

$$= -\lim_{T\to\infty} \left\{ e_{\frac{\beta-1}{\mu}}(T,0)\, V_{x}(T,x^{*}(T))\,(x^{*}(T)-x(T)) - e_{\frac{\beta-1}{\mu}}(0,0)\, V_{x}(0,x^{*}(0))\,(x^{*}(0)-x(0)) \right\}.$$

Because $x^{*}(0) - x(0) = 0$, we have

$$J = -\lim_{T\to\infty} e_{\frac{\beta-1}{\mu}}(T,0)\, V_{x}(T,x^{*}(T))\,(x^{*}(T)-x(T)).$$

It then follows from the transversality condition (3.7) that $(SP)^{*} - (SP) \geq 0$, which demonstrates the desired result.
3.4 An Example in Financial Economics: Optimal Growth Model
In this section, we solve the social planner's problem on an isolated time scale $\mathbb{T}$
by the guessing method. In the social planner's problem, the objective function $F$
is assumed to be $U$, and it is called the utility function. First, we shall give the
definition and the properties of the utility function $U$.
A utility is a numerical rating assigned to every possible outcome a decision maker may be faced with. It is often modeled to be affected by consumption of various goods and services, possession of wealth, and spending of leisure time. Next we give the mathematical description of the utility function. Let $U : (0,\infty) \to \mathbb{R}$ be a strictly increasing, concave, and continuously differentiable function, with

1. $U(0)$ means $\lim_{c\to 0^{+}} U(c) \geq -\infty$;
2. $U'(0)$ means $\lim_{c\to 0^{+}} U'(c) \leq \infty$;
3. $U'(\infty)$ means $\lim_{c\to\infty} U'(c) = 0$.

A function with these properties is called a utility function. Here we give some examples of the best known utility functions. The first one is the Cobb-Douglas utility function

$$U(x, y) = x^{a} y^{b}, \qquad a, b > 0.$$

Another utility function is the Constant Relative Risk Aversion (CRRA), or power, utility function, given by

$$U(C) = \begin{cases} \dfrac{C^{1-\theta}}{1-\theta} & \text{for } \theta > 0,\ \theta \neq 1 \\ \ln C & \text{for } \theta = 1, \end{cases}$$

where $\frac{1}{\theta}$ is the intertemporal substitution elasticity between consumption in any two periods; i.e., it measures the willingness to substitute consumption between different periods. If $\theta$ gets smaller, then the household is more willing to substitute consumption over time. Another utility function is the Constant Absolute Risk Aversion (CARA) utility function, which is

$$U(C) = -\frac{1}{\alpha}\, e^{-\alpha C}, \qquad \alpha > 0.$$

Lastly, we have the quadratic utility function, which is given by

$$U(C) = C - \frac{a}{2}\, C^{2}, \qquad a > 0.$$

For further reading, we refer the reader to the paper by I. Karatzas [12] and the lecture notes by D. Romer [15].
3.4.1 Solving the Bellman Equation by Guessing Method
As we mentioned in the introduction, trading is not a reasonably periodic and continuous act. A stock or a merchandise will be purchased over a period of time, sometimes daily or hourly, and sometimes every minute or second. Thus, there is no rule that tells us the frequency of purchases is periodic or continuous. Each purchase may be scattered randomly over a period of time. Therefore, we formulate the problem on isolated time domains.

First, we define the deterministic utility maximization problem. Assume $\sup \mathbb{T} = \infty$. Then the infinite horizon dynamic sequence problem on an isolated time scale $\mathbb{T}$ has the form

$$\sup_{\{k(\sigma(t))\}_{t=0}^{\infty}} \int_{s\in[0,\infty)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,0)\, U(k(s), c(s))\,\Delta s \qquad (3.10)$$
$$\text{s.t. } k^{\Delta}(t) \in \Gamma(t, k(t), c(t))$$
$$k(0) \in X \text{ given},$$

where $k(t)$ is the state variable and $c(t)$ is the control variable. In the social planner's problem, we use the CRRA utility function, i.e., $U(k(t), c(t)) = \ln(c(t))$. Then we define the corresponding Bellman equation as

$$V(t, k(t)) = \mu(t)\, U(k(t), c(t)) + \beta\, V(\sigma(t), k(\sigma(t))) \qquad (3.11)$$
$$\text{s.t. } k^{\Delta}(t) \in \Gamma(t, k(t), c(t)).$$
Our goal is to find the sequence $\{k(\sigma(t))\}_{t=0}^{\infty}$ which maximizes the given utility function. The budget constraint for the social planner's problem is given as

$$k(\sigma(t)) = k^{\alpha}(t) - c(t), \qquad (3.12)$$

where $0 < \alpha < 1$. First, we need to show the budget constraint is appropriately chosen. Indeed, we get the desired result by doing elementary algebra and by using the definition of the $\Delta$-derivative on isolated time scales:

$$k(\sigma(t)) = k^{\alpha}(t) - c(t)$$
$$\frac{k(\sigma(t)) - k(t)}{\mu(t)} = \frac{k^{\alpha}(t) - k(t) - c(t)}{\mu(t)}$$
$$k^{\Delta}(t) \in \left\{ \frac{k^{\alpha}(t) - k(t) - c(t)}{\mu(t)} \right\} = \Gamma(t, k(t), c(t)).$$

Now substitute the utility function $U(k(t), c(t)) = \ln(c(t))$ and the budget constraint $k(\sigma(t)) = k^{\alpha}(t) - c(t)$ into the Bellman equation. Then we have the Bellman equation as
$$V(t, k(t)) = \mu(t)\ln(k^{\alpha}(t) - k(\sigma(t))) + \beta\, V(\sigma(t), k(\sigma(t))). \qquad (3.13)$$

By guessing, we say the solution of this Bellman equation, $V(t, k(t))$, has the form

$$V(t, k(t)) = A(t)\ln(k(t)) + B(t). \qquad (3.14)$$

Theorem 14 The value function $V(t, k(t))$ is equal to

$$V(t,k(t)) = \alpha\, e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)\left[ \frac{1}{1-\alpha\beta} - \sum_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\alpha\beta-1}{\mu}}(s,0)\,\mu(s) \right]\ln(k(t)) + e_{\frac{\frac{1}{\beta}-1}{\mu}}(t,0)\left[ B(0) - \int_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,0)\left( \ln\!\left( \frac{\mu(s)}{\mu(s)+\beta A(\sigma(s))} \right) + \frac{\beta A(\sigma(s))}{\mu(s)}\ln\!\left( \frac{\beta A(\sigma(s))}{\mu(s)+\beta A(\sigma(s))} \right) \right)\Delta s \right] \qquad (3.15)$$

for the sequence $\{k(\sigma(t))\}_{t=0}^{\infty}$ which maximizes $(SP)$.
Proof. Because the value function $V(t, k(t))$ given by (3.14) is a solution of the Bellman equation (3.3), it satisfies equation (3.13). Therefore, we have

$$A(t)\ln(k(t)) + B(t) = \mu(t)\ln(k^{\alpha}(t) - k(\sigma(t))) + \beta\left[ A(\sigma(t))\ln(k(\sigma(t))) + B(\sigma(t)) \right]. \qquad (3.16)$$

Then, by differentiating both sides of equation (3.16) with respect to $k(\sigma(t))$, we have

$$0 = \mu(t)\left( \frac{-1}{k^{\alpha}(t) - k(\sigma(t))} \right) + \beta A(\sigma(t))\, \frac{1}{k(\sigma(t))}.$$

We substitute the budget constraint (3.12) back, and then we find the control variable $c(t)$ as

$$\frac{\mu(t)}{c(t)} = \frac{\beta A(\sigma(t))}{k^{\alpha}(t) - c(t)}$$
$$\mu(t)\, k^{\alpha}(t) = \left[ \mu(t) + \beta A(\sigma(t)) \right] c(t)$$
$$c(t) = \frac{\mu(t)}{\mu(t) + \beta A(\sigma(t))}\, k^{\alpha}(t).$$

Hence, we find the state variable $k(\sigma(t))$ as

$$k(\sigma(t)) = k^{\alpha}(t) - c(t) = k^{\alpha}(t) - \frac{\mu(t)}{\mu(t) + \beta A(\sigma(t))}\, k^{\alpha}(t) = \frac{\beta A(\sigma(t))}{\mu(t) + \beta A(\sigma(t))}\, k^{\alpha}(t).$$

Now we plug these values into equation (3.16). Then we have

$$A(t)\ln(k(t)) + B(t) = \mu(t)\ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))}\, k^{\alpha}(t) \right) + \beta A(\sigma(t))\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))}\, k^{\alpha}(t) \right) + \beta B(\sigma(t))$$
$$= \alpha\mu(t)\ln(k(t)) + \mu(t)\ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))} \right) + \alpha\beta A(\sigma(t))\ln(k(t)) + \beta A(\sigma(t))\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))} \right) + \beta B(\sigma(t)). \qquad (3.17)$$

Next, we have to find the coefficients $A(t)$ and $B(t)$ in the value function. From the equivalence of the RHS and LHS of equation (3.17), we find $A(t)$ and $B(t)$ as follows.
For the coefficient $A(t)$, we have

$$A(t) = \alpha\mu(t) + \alpha\beta\, A(\sigma(t)).$$

By doing elementary algebra, we obtain

$$\frac{1}{\alpha\beta}\, A(t) = \frac{1}{\beta}\,\mu(t) + A(\sigma(t))$$
$$A(\sigma(t)) - \frac{1}{\alpha\beta}\, A(t) = -\frac{1}{\beta}\,\mu(t)$$
$$\frac{A(\sigma(t))}{e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(\sigma(t),0)} - \frac{\frac{1}{\alpha\beta}\, A(t)}{e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(\sigma(t),0)} = \frac{-\frac{1}{\beta}\,\mu(t)}{e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(\sigma(t),0)}. \qquad (3.18)$$

Then, by the recurrence relation (2.4) of the exponential function, we have

$$e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(\sigma(t),0) = \frac{1}{\alpha\beta}\, e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0).$$

When we plug this equality into equation (3.18), we obtain

$$\frac{A(\sigma(t))}{e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(\sigma(t),0)} - \frac{A(t)}{e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)} = \frac{-\alpha\mu(t)}{e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)}.$$

We divide the entire equation by $\mu(t)$, and then we obtain a first order linear dynamic equation on time scales:

$$\frac{\dfrac{A(\sigma(t))}{e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(\sigma(t),0)} - \dfrac{A(t)}{e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)}}{\mu(t)} = \frac{-\alpha}{e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)},$$

which is equivalent to

$$\left( \frac{A(t)}{e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)} \right)^{\Delta} = \frac{-\alpha}{e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)} = -\alpha\, e_{\ominus\left(\frac{\frac{1}{\alpha\beta}-1}{\mu}\right)}(t,0).$$

By the definition of "circle minus" on time scales, we have

$$e_{\ominus\left(\frac{\frac{1}{\alpha\beta}-1}{\mu}\right)}(t,0) = e_{\frac{\alpha\beta-1}{\mu}}(t,0).$$

This implies

$$\left( \frac{A(t)}{e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)} \right)^{\Delta} = -\alpha\, e_{\frac{\alpha\beta-1}{\mu}}(t,0). \qquad (3.19)$$
Equation (3.19) is a first order linear dynamic equation on the isolated time scale $\mathbb{T}$. Then, by integrating both sides of equation (3.19) on the domain $\mathbb{T}\cap[0,t) = [0,t)_{\mathbb{T}}$, we have

$$\int_{s\in[0,t)_{\mathbb{T}}} \left( \frac{A(s)}{e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(s,0)} \right)^{\Delta} \Delta s = -\alpha \int_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\alpha\beta-1}{\mu}}(s,0)\,\Delta s.$$

We can rewrite the Cauchy integral in terms of the sum operator on isolated time scales, and then, since $e_{p}(t,t) = 1$ for all $t \in \mathbb{T}$, we have

$$\frac{A(t)}{e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)} - A(0) = -\alpha \sum_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\alpha\beta-1}{\mu}}(s,0)\,\mu(s)$$
$$A(t) = e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)\, A(0) - \alpha\, e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0) \sum_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\alpha\beta-1}{\mu}}(s,0)\,\mu(s).$$

Notice that for $\mathbb{T} = \mathbb{Z}$, we have $\mu(t) = 1$. This implies

$$A(t) = e_{\frac{1}{\alpha\beta}-1}(t,0)\, A(0) - \alpha\, e_{\frac{1}{\alpha\beta}-1}(t,0) \sum_{s\in[0,t)_{\mathbb{T}}} e_{\alpha\beta-1}(s,0).$$

Because the exponential function is equivalent to

$$e_{\frac{1}{\alpha\beta}-1}(t,0) = \left( 1 + \frac{1}{\alpha\beta} - 1 \right)^{t-0} = \left( \frac{1}{\alpha\beta} \right)^{t}$$

for $\mathbb{T} = \mathbb{Z}$, we have

$$A(t) = \left( \frac{1}{\alpha\beta} \right)^{t} A(0) - \alpha\left( \frac{1}{\alpha\beta} \right)^{t} \sum_{s=0}^{t-1} (\alpha\beta)^{s},$$

where $\alpha\beta$ is constant and $0 < \alpha\beta < 1$. Then the sum of this product in the last equation above is

$$A(t) = \frac{1}{(\alpha\beta)^{t}}\, A(0) - \alpha\, \frac{1}{(\alpha\beta)^{t}} \left[ \frac{(\alpha\beta)^{s}}{\alpha\beta - 1} \right]_{0}^{t}$$
$$= \frac{1}{(\alpha\beta)^{t}}\, A(0) - \alpha\, \frac{1}{(\alpha\beta)^{t}} \left( \frac{(\alpha\beta)^{t}}{\alpha\beta - 1} - \frac{1}{\alpha\beta - 1} \right)$$
$$= \frac{1}{(\alpha\beta)^{t}}\, A(0) - \frac{\alpha}{\alpha\beta - 1} + \frac{\alpha}{(\alpha\beta)^{t}(\alpha\beta - 1)}.$$

J. Hassler [10] formulated $A(t)$ as a constant in discrete time:

$$A = \frac{\alpha}{1 - \alpha\beta}.$$

As a result of this, we assume the initial value $A(0)$ as

$$A(0) = \frac{\alpha}{1 - \alpha\beta}.$$

Hence, we find $A(t)$ as

$$A(t) = \frac{\alpha}{1 - \alpha\beta}\, e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0) - \alpha\, e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0) \sum_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\alpha\beta-1}{\mu}}(s,0)\,\mu(s),$$

which is

$$A(t) = \alpha\, e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)\left[ \frac{1}{1 - \alpha\beta} - \sum_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\alpha\beta-1}{\mu}}(s,0)\,\mu(s) \right] \qquad (3.20)$$

for any isolated time scale $\mathbb{T}$.
Next we solve for $B(t)$ in the same manner we solved for $A(t)$. For the coefficient $B(t)$, we have

$$B(t) = \mu(t)\ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))} \right) + \beta A(\sigma(t))\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))} \right) + \beta B(\sigma(t)).$$

By rewriting the equation, we have

$$\frac{1}{\beta}\, B(t) = \frac{1}{\beta}\left[ \mu(t)\ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))} \right) + \beta A(\sigma(t))\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))} \right) \right] + B(\sigma(t))$$
$$B(\sigma(t)) - \frac{1}{\beta}\, B(t) = -\frac{\mu(t)}{\beta}\ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))} \right) - A(\sigma(t))\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))} \right).$$

By dividing both sides of the equation by the integrating factor $e_{\frac{\frac{1}{\beta}-1}{\mu}}(\sigma(t),0)$, we have

$$\frac{B(\sigma(t))}{e_{\frac{\frac{1}{\beta}-1}{\mu}}(\sigma(t),0)} - \frac{\frac{1}{\beta}\, B(t)}{e_{\frac{\frac{1}{\beta}-1}{\mu}}(\sigma(t),0)} = \frac{-\frac{\mu(t)}{\beta}\ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))} \right)}{e_{\frac{\frac{1}{\beta}-1}{\mu}}(\sigma(t),0)} - \frac{A(\sigma(t))\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))} \right)}{e_{\frac{\frac{1}{\beta}-1}{\mu}}(\sigma(t),0)}.$$

From the recurrence relation of the exponential function, i.e.,

$$e_{\frac{\frac{1}{\beta}-1}{\mu}}(\sigma(t),0) = \frac{1}{\beta}\, e_{\frac{\frac{1}{\beta}-1}{\mu}}(t,0),$$

we have

$$\frac{B(\sigma(t))}{e_{\frac{\frac{1}{\beta}-1}{\mu}}(\sigma(t),0)} - \frac{B(t)}{e_{\frac{\frac{1}{\beta}-1}{\mu}}(t,0)} = \frac{-\mu(t)\ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))} \right)}{e_{\frac{\frac{1}{\beta}-1}{\mu}}(t,0)} - \frac{\beta A(\sigma(t))\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))} \right)}{e_{\frac{\frac{1}{\beta}-1}{\mu}}(t,0)}.$$

To constitute the $\Delta$-derivative of $B(t)$ on the LHS, we divide the entire equation by $\mu(t)$. Thus, we have

$$\frac{\dfrac{B(\sigma(t))}{e_{\frac{\frac{1}{\beta}-1}{\mu}}(\sigma(t),0)} - \dfrac{B(t)}{e_{\frac{\frac{1}{\beta}-1}{\mu}}(t,0)}}{\mu(t)} = -e_{\ominus\left(\frac{\frac{1}{\beta}-1}{\mu}\right)}(t,0)\ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))} \right) - e_{\ominus\left(\frac{\frac{1}{\beta}-1}{\mu}\right)}(t,0)\, \frac{\beta A(\sigma(t))}{\mu(t)}\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))} \right),$$

which is equivalent to

$$\left( \frac{B(t)}{e_{\frac{\frac{1}{\beta}-1}{\mu}}(t,0)} \right)^{\Delta} = -e_{\ominus\left(\frac{\frac{1}{\beta}-1}{\mu}\right)}(t,0)\left[ \ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))} \right) + \frac{\beta A(\sigma(t))}{\mu(t)}\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))} \right) \right].$$

By the definition of "circle minus" on time scales, we have

$$e_{\ominus\left(\frac{\frac{1}{\beta}-1}{\mu}\right)}(t,0) = e_{\frac{\beta-1}{\mu}}(t,0).$$

This implies

$$\left( \frac{B(t)}{e_{\frac{\frac{1}{\beta}-1}{\mu}}(t,0)} \right)^{\Delta} = -e_{\frac{\beta-1}{\mu}}(t,0)\left[ \ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))} \right) + \frac{\beta A(\sigma(t))}{\mu(t)}\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))} \right) \right]. \qquad (3.21)$$
Equation (3.21) is a first order linear dynamic equation on the isolated time scale $\mathbb{T}$, and by integrating both sides on the domain $[0,t)_{\mathbb{T}}$, we have

$$\int_{s\in[0,t)_{\mathbb{T}}} \left( \frac{B(s)}{e_{\frac{\frac{1}{\beta}-1}{\mu}}(s,0)} \right)^{\Delta} \Delta s = -\int_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,0)\left[ \ln\!\left( \frac{\mu(s)}{\mu(s)+\beta A(\sigma(s))} \right) + \frac{\beta A(\sigma(s))}{\mu(s)}\ln\!\left( \frac{\beta A(\sigma(s))}{\mu(s)+\beta A(\sigma(s))} \right) \right]\Delta s$$

$$\frac{B(t)}{e_{\frac{\frac{1}{\beta}-1}{\mu}}(t,0)} - B(0) = -\int_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,0)\left[ \ln\!\left( \frac{\mu(s)}{\mu(s)+\beta A(\sigma(s))} \right) + \frac{\beta A(\sigma(s))}{\mu(s)}\ln\!\left( \frac{\beta A(\sigma(s))}{\mu(s)+\beta A(\sigma(s))} \right) \right]\Delta s.$$

Hence, we find $B(t)$ as

$$B(t) = e_{\frac{\frac{1}{\beta}-1}{\mu}}(t,0)\left[ B(0) - \int_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,0)\left( \ln\!\left( \frac{\mu(s)}{\mu(s)+\beta A(\sigma(s))} \right) + \frac{\beta A(\sigma(s))}{\mu(s)}\ln\!\left( \frac{\beta A(\sigma(s))}{\mu(s)+\beta A(\sigma(s))} \right) \right)\Delta s \right],$$

where

$$A(\sigma(t)) = -\frac{1}{\beta}\,\mu(t) + \frac{1}{\beta}\, e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)\left[ \frac{1}{1-\alpha\beta} - \sum_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\alpha\beta-1}{\mu}}(s,0)\,\mu(s) \right].$$
Notice that for $\mathbb{T} = \mathbb{Z}$, we have $\mu(t) = 1$. Since $A(t)$ is constant for $\mathbb{T} = \mathbb{Z}$, we have $A(\sigma(t)) = A(t) = \frac{\alpha}{1-\alpha\beta}$. This implies

$$B(t) = e_{\frac{1}{\beta}-1}(t,0)\left[ B(0) - \sum_{s\in[0,t)_{\mathbb{T}}} e_{\beta-1}(s,0)\left( \ln\!\left( \frac{1}{1+\beta\frac{\alpha}{1-\alpha\beta}} \right) + \beta\frac{\alpha}{1-\alpha\beta}\ln\!\left( \frac{\beta\frac{\alpha}{1-\alpha\beta}}{1+\beta\frac{\alpha}{1-\alpha\beta}} \right) \right) \right].$$

Because the exponential functions are equivalent to

$$e_{\frac{1}{\beta}-1}(t,0) = \left( 1 + \frac{1}{\beta} - 1 \right)^{t-0} = \left( \frac{1}{\beta} \right)^{t} \quad\text{and}\quad e_{\beta-1}(t,0) = (1 + \beta - 1)^{t-0} = \beta^{t}$$

for $\mathbb{T} = \mathbb{Z}$, we have from the definition of the sum operator on time scales

$$B(t) = \left( \frac{1}{\beta} \right)^{t}\left[ B(0) - \sum_{s\in[0,t)_{\mathbb{T}}} \beta^{s}\left( \ln(1-\alpha\beta) + \beta\frac{\alpha}{1-\alpha\beta}\ln(\alpha\beta) \right) \right]$$
$$= \left( \frac{1}{\beta} \right)^{t} B(0) - \left( \frac{1}{\beta} \right)^{t}\left( \ln(1-\alpha\beta) + \frac{\alpha\beta}{1-\alpha\beta}\ln(\alpha\beta) \right)\sum_{s=0}^{t-1}\beta^{s},$$

where $\beta$ is constant and $0 < \beta < 1$. (Here we used $1 + \beta\frac{\alpha}{1-\alpha\beta} = \frac{1}{1-\alpha\beta}$, so the first logarithm equals $\ln(1-\alpha\beta)$ and the second equals $\ln(\alpha\beta)$.) By calculating the sum in the last equation above, we have

$$B(t) = \left( \frac{1}{\beta} \right)^{t} B(0) - \left( \frac{1}{\beta} \right)^{t}\left( \ln(1-\alpha\beta) + \frac{\alpha\beta}{1-\alpha\beta}\ln(\alpha\beta) \right)\left[ \frac{\beta^{s}}{\beta-1} \right]_{0}^{t}$$
$$= \left( \frac{1}{\beta} \right)^{t} B(0) - \left( \frac{1}{\beta} \right)^{t}\left( \ln(1-\alpha\beta) + \frac{\alpha\beta}{1-\alpha\beta}\ln(\alpha\beta) \right)\left( \frac{\beta^{t}}{\beta-1} - \frac{1}{\beta-1} \right)$$
$$= \left( \frac{1}{\beta} \right)^{t} B(0) - \left( \frac{1}{\beta} \right)^{t}\left( 1 - \beta^{t} \right)\left( \frac{\ln(1-\alpha\beta)}{1-\beta} + \frac{\alpha\beta\ln(\alpha\beta)}{(1-\beta)(1-\alpha\beta)} \right).$$

J. Hassler [10] formulated $B(t)$ as a constant in discrete time:

$$B = \frac{\ln(1-\alpha\beta)}{1-\beta} + \frac{\alpha\beta\ln(\alpha\beta)}{(1-\beta)(1-\alpha\beta)}.$$

As a result of this, we assume the initial value $B(0)$ to be

$$B(0) = \frac{\ln(1-\alpha\beta)}{1-\beta} + \frac{\alpha\beta\ln(\alpha\beta)}{(1-\beta)(1-\alpha\beta)}.$$
Hence, we find that $B(t)$ is

$$B(t) = e_{\frac{\frac{1}{\beta}-1}{\mu}}(t,0)\left[ B(0) - \int_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,0)\left( \ln\!\left( \frac{\mu(s)}{\mu(s)+\beta A(\sigma(s))} \right) + \frac{\beta A(\sigma(s))}{\mu(s)}\ln\!\left( \frac{\beta A(\sigma(s))}{\mu(s)+\beta A(\sigma(s))} \right) \right)\Delta s \right], \qquad (3.22)$$

where

$$A(\sigma(t)) = -\frac{1}{\beta}\,\mu(t) + \frac{1}{\beta}\, e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)\left[ \frac{1}{1-\alpha\beta} - \sum_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\alpha\beta-1}{\mu}}(s,0)\,\mu(s) \right]$$

and

$$B(0) = \frac{\ln(1-\alpha\beta)}{1-\beta} + \frac{\alpha\beta\ln(\alpha\beta)}{(1-\beta)(1-\alpha\beta)}.$$
Therefore, the value function $V(t, k(t))$ given by equation (3.14) is found to be

$$V(t,k(t)) = \alpha\, e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)\left[ \frac{1}{1-\alpha\beta} - \sum_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\alpha\beta-1}{\mu}}(s,0)\,\mu(s) \right]\ln(k(t)) + e_{\frac{\frac{1}{\beta}-1}{\mu}}(t,0)\left[ B(0) - \int_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\beta-1}{\mu}}(s,0)\left( \ln\!\left( \frac{\mu(s)}{\mu(s)+\beta A(\sigma(s))} \right) + \frac{\beta A(\sigma(s))}{\mu(s)}\ln\!\left( \frac{\beta A(\sigma(s))}{\mu(s)+\beta A(\sigma(s))} \right) \right)\Delta s \right].$$
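The closed form (3.20) can be sanity-checked numerically: it must satisfy the defining recursion $A(t) = \alpha\mu(t) + \alpha\beta A(\sigma(t))$ on any isolated time scale. A Python sketch (not part of the thesis; the graininess sequence below is an arbitrary illustration):

import math

alpha, beta = 0.3, 0.9
mu = [0.5, 1.0, 0.25, 0.75, 1.0, 0.5, 0.8]    # graininess mu(sigma^n(0))

def A(n):
    # (3.20) at t = sigma^n(0); for constant regressive functions the
    # exponentials reduce to powers: e_{(1/(ab)-1)/mu}(t,0) = (1/(ab))**n
    # and e_{(ab-1)/mu}(s,0) = (ab)**m.
    s = sum((alpha * beta)**m * mu[m] for m in range(n))
    return alpha * (1.0 / (alpha * beta))**n * (1.0 / (1.0 - alpha * beta) - s)

for n in range(5):
    print(math.isclose(A(n), alpha * mu[n] + alpha * beta * A(n + 1)))  # all True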
Lemma 15 If $0 < \mu(t) \leq 1$, then $A(t)$ is a positive valued function of $t$ over $[0,\infty)_{\mathbb{T}}$.

Proof. In equation (3.20), let $D$ be the difference given in the brackets. We know $0 < \alpha\beta < 1$, and for $p = \frac{\frac{1}{\alpha\beta}-1}{\mu} \in \mathcal{R}^{+}$, the sign of the exponential function $e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)$ is positive (see [7], Theorem 2.48 (i)). In order to show $A(t)$ is positive, we need to show that

$$D = \frac{1}{1-\alpha\beta} - \sum_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\alpha\beta-1}{\mu}}(s,0)\,\mu(s)$$

is positive. By rewriting the second term in the above difference, we have

$$D = \frac{1}{1-\alpha\beta} - \left[ \mu(0) + e_{\frac{\alpha\beta-1}{\mu}}(\sigma(0),0)\,\mu(\sigma(0)) + e_{\frac{\alpha\beta-1}{\mu}}(\sigma^{2}(0),0)\,\mu(\sigma^{2}(0)) + \cdots + e_{\frac{\alpha\beta-1}{\mu}}(\sigma^{g(t)}(0),0)\,\mu(\sigma^{g(t)}(0)) \right]$$
$$= \frac{1}{1-\alpha\beta} - \left[ \mu(0) + (\alpha\beta)\,\mu(\sigma(0)) + (\alpha\beta)^{2}\,\mu(\sigma^{2}(0)) + \cdots + (\alpha\beta)^{g(t)}\,\mu(\sigma^{g(t)}(0)) \right].$$

Then, by using the geometric series, we have

$$D = \sum_{i=0}^{\infty} (\alpha\beta)^{i} - \sum_{i=0}^{g(t)} (\alpha\beta)^{i}\,\mu(\sigma^{i}(0)).$$

For $0 < \mu(t) \leq 1$, we know

$$\sum_{i=0}^{\infty} (\alpha\beta)^{i} > \sum_{i=0}^{g(t)} (\alpha\beta)^{i} \geq \sum_{i=0}^{g(t)} (\alpha\beta)^{i}\,\mu(\sigma^{i}(0)).$$

This implies

$$D = \sum_{i=0}^{\infty} (\alpha\beta)^{i} - \sum_{i=0}^{g(t)} (\alpha\beta)^{i}\,\mu(\sigma^{i}(0)) > 0.$$

Thus, for an isolated time scale $\mathbb{T}$ with $0 < \mu(t) \leq 1$, $A(t)$ is positive over $[0,\infty)_{\mathbb{T}}$.
From (3.20) we find $A(\sigma(t))$ as

$$A(\sigma(t)) = -\frac{1}{\beta}\,\mu(t) + \frac{1}{\beta}\, e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)\left[ \frac{1}{1-\alpha\beta} - \sum_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\alpha\beta-1}{\mu}}(s,0)\,\mu(s) \right].$$

Indeed, plugging $\sigma(t)$ in for each $t$ in equation (3.20) gives us

$$A(\sigma(t)) = \frac{\alpha}{1-\alpha\beta}\, e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(\sigma(t),0) - \alpha\, e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(\sigma(t),0) \sum_{s\in[0,\sigma(t))_{\mathbb{T}}} e_{\frac{\alpha\beta-1}{\mu}}(s,0)\,\mu(s).$$

By the recurrence relation of the exponential function, we have

$$A(\sigma(t)) = \frac{\alpha}{1-\alpha\beta}\left( \frac{1}{\alpha\beta} \right) e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0) - \alpha\left( \frac{1}{\alpha\beta} \right) e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0) \sum_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\alpha\beta-1}{\mu}}(s,0)\,\mu(s) - \alpha\left( \frac{1}{\alpha\beta} \right) e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0) \sum_{s\in[t,\sigma(t))_{\mathbb{T}}} e_{\frac{\alpha\beta-1}{\mu}}(s,0)\,\mu(s).$$

This implies that

$$A(\sigma(t)) = \frac{1}{\beta(1-\alpha\beta)}\, e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0) - \frac{1}{\beta}\, e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0) \sum_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\alpha\beta-1}{\mu}}(s,0)\,\mu(s) - \frac{1}{\beta}\, e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)\, e_{\frac{\alpha\beta-1}{\mu}}(t,0)\,\mu(t).$$

Since

$$e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)\, e_{\frac{\alpha\beta-1}{\mu}}(t,0) = e_{\left(\frac{\frac{1}{\alpha\beta}-1}{\mu}\right)\oplus\left(\frac{\alpha\beta-1}{\mu}\right)}(t,0) = e_{0}(t,0) = 1,$$

we have

$$A(\sigma(t)) = \frac{1}{\beta(1-\alpha\beta)}\, e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0) - \frac{1}{\beta}\,\mu(t) - \frac{1}{\beta}\, e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0) \sum_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\alpha\beta-1}{\mu}}(s,0)\,\mu(s),$$

and that is

$$A(\sigma(t)) = -\frac{1}{\beta}\,\mu(t) + \frac{1}{\beta}\, e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)\left[ \frac{1}{1-\alpha\beta} - \sum_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\alpha\beta-1}{\mu}}(s,0)\,\mu(s) \right].$$
By having the solution of the Bellman equation, we can find the maximizing sequence $\{k(\sigma(t))\}_{t=0}^{\infty}$ by obtaining the state variable $k(t)$. First, let $\lambda(t)$ be equal to

$$\lambda(t) = \frac{\beta A(\sigma(t))}{\mu(t) + \beta A(\sigma(t))}.$$

Then, we have

$$k(\sigma(t)) = \frac{\beta A(\sigma(t))}{\mu(t) + \beta A(\sigma(t))}\, k^{\alpha}(t)$$
$$k(\sigma(t)) = \lambda(t)\, k^{\alpha}(t). \qquad (3.23)$$

Now, by iterating equation (3.23) from the initial point $t = 0$, we have

$$k(\sigma(0)) = \lambda(0)\, k^{\alpha}(0)$$
$$k(\sigma^{2}(0)) = \lambda(\sigma(0))\, k^{\alpha}(\sigma(0)) = \lambda(\sigma(0))\left[ \lambda(0)\, k^{\alpha}(0) \right]^{\alpha} = \lambda(\sigma(0))\,\lambda^{\alpha}(0)\, k^{\alpha^{2}}(0)$$
$$k(\sigma^{3}(0)) = \lambda(\sigma^{2}(0))\, k^{\alpha}(\sigma^{2}(0)) = \lambda(\sigma^{2}(0))\left[ \lambda(\sigma(0))\,\lambda^{\alpha}(0)\, k^{\alpha^{2}}(0) \right]^{\alpha} = \lambda(\sigma^{2}(0))\,\lambda^{\alpha}(\sigma(0))\,\lambda^{\alpha^{2}}(0)\, k^{\alpha^{3}}(0)$$
$$\vdots$$

and for $t = \sigma^{n}(0) \in [0,\infty)_{\mathbb{T}}$,

$$k(t) = k(\sigma^{n}(0)) = \left[ \prod_{j=0}^{n-1} \lambda\!\left( \sigma^{j}(0) \right)^{\alpha^{(n-1)-j}} \right] k^{\alpha^{n}}(0). \qquad (3.24)$$
Lastly, we find the optimal control variable $c(t)$ for the maximizing sequence $\{k(\sigma(t))\}_{t=0}^{\infty}$. By plugging $A(\sigma(t))$ into the equation for $c(t)$, we have

$$c(t) = \frac{\mu(t)}{\mu(t) + \beta A(\sigma(t))}\, k^{\alpha}(t) = \frac{\mu(t)\, k^{\alpha}(t)}{\mu(t) + \beta\left( -\frac{1}{\beta}\,\mu(t) + \frac{1}{\beta}\, e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)\left[ \frac{1}{1-\alpha\beta} - \sum_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\alpha\beta-1}{\mu}}(s,0)\,\mu(s) \right] \right)},$$

which is

$$c(t) = \frac{\mu(t)}{e_{\frac{\frac{1}{\alpha\beta}-1}{\mu}}(t,0)\left[ \frac{1}{1-\alpha\beta} - \sum_{s\in[0,t)_{\mathbb{T}}} e_{\frac{\alpha\beta-1}{\mu}}(s,0)\,\mu(s) \right]}\, k^{\alpha}(t) \qquad (3.25)$$

for the state variable $k(t)$ in (3.24).
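A short simulation makes the optimal plan concrete: starting from $k(0)$, iterate (3.23) with $\lambda(t) = \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))}$ and read consumption off $c(t) = \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))}\,k^{\alpha}(t)$. The Python sketch below is illustrative, not part of the thesis; the graininess sequence is chosen with $0 < \mu \leq 1$ so that Lemma 15 guarantees $A(t) > 0$:

alpha, beta = 0.3, 0.9
mu = [0.5, 1.0, 0.25, 0.75, 1.0, 0.5, 0.8]    # graininess, 0 < mu <= 1 (Lemma 15)

def A(n):                                     # closed form (3.20) at t = sigma^n(0)
    s = sum((alpha * beta)**m * mu[m] for m in range(n))
    return alpha * (1.0 / (alpha * beta))**n * (1.0 / (1.0 - alpha * beta) - s)

k = 0.5                                       # initial capital k(0)
for n in range(len(mu) - 1):
    lam = beta * A(n + 1) / (mu[n] + beta * A(n + 1))
    c = mu[n] * k**alpha / (mu[n] + beta * A(n + 1))   # consumption c(t)
    k = lam * k**alpha                        # (3.23); equals k**alpha - c by (3.12)
    print(n, round(k, 4), round(c, 4))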
3.5 Existence of Solution of the Bellman Equation
We can solve the Bellman equation either by guessing or by iteration of the value function. Since we formulate the utility maximization problem on an infinite horizon, it is not practical to use the iteration of the value function method. Therefore, in the previous section, we solved the utility maximization problem by the guessing method. We guessed that the value function $V(t, k(t))$ has the form $A(t)\ln(k(t)) + B(t)$, and we said it satisfies the Bellman equation

$$V(t, k(t)) = F(t, U) + \beta\, V(\sigma(t), k(\sigma(t))), \qquad t \in \mathbb{T},$$

where $F(t, U) = \mu(t)\, U(k(t), c(t))$. Therefore, we need to show the existence and uniqueness of the solution of the Bellman equation. Before we show the existence and uniqueness of this solution, we shall give some definitions about metric spaces, norms, and contraction mappings.

First, assume that $\mathcal{C}$ is the metric space of continuous, real-valued, and bounded functions with metric $\rho$ defined by

$$\rho(f, g) = \| f - g \| \qquad (f, g \in \mathcal{C}).$$

We associate with each $f \in \mathcal{C}$ its supremum norm

$$\| f \| = \sup_{x} |f(x)|.$$

Completeness of $\mathcal{C}$ is proved by W. Rudin ([16], Thm. 7.15). Now we can define a contraction mapping, $T$, on the complete metric space $\mathcal{C}$.

Next, we define the utility function $U$ by

$$U : (0, \infty) \to \mathbb{R}.$$

Note that the utility function is assumed to be strictly increasing, strictly concave, and continuously differentiable. For $t \in \mathbb{T}$, we define the state variable $k(t)$ and the control variable $c(t)$ by

$$k, c : \mathbb{T} \to \mathbb{R}.$$

We assume that $k(t)$ and $c(t)$ are bounded. Then we define the value function by

$$V : \mathbb{T} \times \mathbb{R} \to \mathcal{C}, \qquad (t, k(t)) \mapsto V(t, k(t)).$$

Now we need to show that the value function we obtained in the previous section is the unique solution of the Bellman equation. To achieve our goal, we use the Contraction Mapping Theorem, which deals with operators that map a given metric space into itself. Prior to that, we must show that the necessary assumptions of the contraction mapping theorem hold. First we define the mapping $T$ as

$$TV = F(\cdot, U) + \beta\, V^{\sigma}. \qquad (3.26)$$

Then, by the following lemma, we prove that $T$ is invariant.
Lemma 16 $T$ is a mapping on the complete metric space $\mathcal{C}$ of bounded and continuous functions. Assume that for all $t \in \mathbb{T}$ we have $V \in \mathcal{C}$. Then we say $TV \in \mathcal{C}$; i.e., $T$ is invariant.

Proof. By the definition of the utility function, we know $U$ is a continuous function. Also, we know the control and state variables, respectively $c(t)$ and $k(t)$, are continuous and bounded on $\mathbb{T}$. The image of bounded functions under a continuous function is also bounded. Hence, we say $U(k(t), c(t))$ is bounded. Additionally, both terms on the RHS of equation (3.26) are bounded. Therefore, we say $TV$ is bounded. The continuity of $V$ follows from the continuity of the utility function. Thus, $TV \in \mathcal{C}$; i.e., the mapping $T$ is invariant ($T(\mathcal{C}) \subseteq \mathcal{C}$).
Next, we need to show that $T$ is a contraction mapping. First we give the basics of a contraction.

Definition 17 Suppose $f$ is a real-valued function on $(-\infty, \infty)$. If $f(x) = x$, then we say $x$ is a fixed point of $f$.

Definition 18 For a metric space $\mathcal{C}$, let $T : \mathcal{C} \to \mathcal{C}$ be a mapping. $T$ is said to be a contraction on $\mathcal{C}$ if for each $x, y \in \mathcal{C}$,

$$\rho(Tx, Ty) \leq \alpha\,\rho(x, y), \qquad 0 \leq \alpha < 1.$$

We refer to the books by W. Rudin [16] and R. R. Goldberg [9] for further reading on the subject. Now, we state the contraction mapping theorem.

Theorem 19 (Contraction Mapping Theorem) A contraction mapping $T$, defined on a complete metric space, has a unique fixed point; that is, there is a unique element $x \in \mathcal{C}$ such that $Tx = x$.
Proof. First of all, we need to show that $T : \mathcal{C} \to \mathcal{C}$ is a contraction. Therefore, choose any $V_{1}, V_{2} \in \mathcal{C}$. Then we have

$$\rho(TV_{1}, TV_{2}) = \| TV_{1} - TV_{2} \|$$
$$= \sup_{t}\sup_{k} \left| TV_{1}(k(t)) - TV_{2}(k(t)) \right|$$
$$= \sup_{t}\sup_{k} \left| \mu(t)\, U(k(t), c(t)) + \beta\, V_{1}(\sigma(t), k(\sigma(t))) - \mu(t)\, U(k(t), c(t)) - \beta\, V_{2}(\sigma(t), k(\sigma(t))) \right|$$
$$= \beta \sup_{t,k} \left| V_{1}(\sigma(t), k(\sigma(t))) - V_{2}(\sigma(t), k(\sigma(t))) \right|.$$

Because we take the supremum of the difference in absolute value over all $t$ and $k$'s, we can write

$$\rho(TV_{1}, TV_{2}) = \beta \sup_{t,k} \left| V_{1}(t, k(t)) - V_{2}(t, k(t)) \right| = \beta\, \| V_{1} - V_{2} \|,$$

where $0 < \beta < 1$. Hence, we say $T : \mathcal{C} \to \mathcal{C}$ is a contraction on the complete metric space $\mathcal{C}$, and by the Contraction Mapping Theorem, $V$ is the unique fixed point such that

$$V = TV = F(\cdot, U) + \beta\, V^{\sigma}.$$
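The contraction property also suggests a constructive check: starting from any bounded $V$, iterating $TV = \mu U + \beta V^{\sigma}$ drives successive iterates together at rate $\beta$ and converges to the fixed point. A Python sketch (not part of the thesis; the horizon is truncated and the utility values are stand-ins):

import math

beta = 0.9
mu = [0.5, 1.0, 0.25, 0.75] * 16                       # graininess sequence
FU = [mu[n] * math.log(1.0 + 0.1 * n) for n in range(len(mu))]  # mu(t) U(.) values

def T(V):
    # (T V)(t) = mu(t) U(t) + beta V(sigma(t)), with the tail held at 0
    return [FU[n] + beta * (V[n + 1] if n + 1 < len(V) else 0.0)
            for n in range(len(V))]

V = [0.0] * len(mu)
for _ in range(200):
    V = T(V)
print(V[0])                                            # approximate fixed point
print(max(abs(a - b) for a, b in zip(V, T(V))))        # 0.0: V = T V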
CHAPTER 4
STOCHASTIC DYNAMIC PROGRAMMING
In this chapter, we formulate the stochastic sequence problem and the corresponding
stochastic Bellman equation. Stochastic dynamic programming reflects
the behavior of the decision maker without risk aversion, i.e., decision making under
uncertainty. Uncertainty is a state of having limited knowledge where it is
impossible to exactly describe the existing state or future outcome. There may be
more than one possible outcome. Therefore, for the measurement of uncertainty,
we have a set of possible states or outcomes where probabilities are assigned to each
possible state or outcome. Moreover, risk is a state of uncertainty where some possible
outcomes have an undesired effect or significant loss. Deterministic problems
are formulated with known parameters, yet real-world problems almost invariably
include some unknown parameters. In conclusion, we say the stochastic case is
more realistic, and it gives more accurate results. For our purposes, we assume $\mathbb{T}$
is an isolated time scale with $\sup \mathbb{T} = \infty$. In Section 4.1, we formulate the infinite
stochastic dynamic sequence problem on $\mathbb{T}$. Then we define the corresponding
stochastic Bellman equation in Section 4.2. In Section 4.3, we solve the social
planner's problem, but this time it is given with a budget constraint that includes
a random variable. Thus, the random variable also appears in the solution. Since
it is not possible to find an explicit solution that involves a random variable, we
examine its distribution in Section 4.4.
4.1 Stochastic Dynamic Sequence Problem

Let $\sup \mathbb{T} = \infty$ and $\mathbb{T} \cap [0,\infty) = [0,\infty)_{\mathbb{T}}$. Then, we define the stochastic sequence problem on the isolated time scale $\mathbb{T}$ as
\[
(SP) \qquad \sup_{\{x(t),\, z(t),\, c(t)\}_{t=0}^{\infty}} \mathbb{E}\left[ \int_{s \in [0,\infty)_{\mathbb{T}}} e_{\beta \ominus 1}(s, 0)\, F(x(s), c(s)) \, \Delta s \right] \tag{4.1}
\]
subject to
\[
x^{\Delta}(t) \in G(t, x(t), z(t), c(t)), \qquad x(0) \in X \ \text{given},
\]
where $\{z(t)\}_{t=0}^{\infty}$ is a sequence of independently and identically distributed random state variables. Deviations of the random variable $z(t)$ are uncorrelated in different time periods; i.e., $\mathrm{Cov}(z(t_i), z(t_j)) = 0$ for all $i \neq j$. $\mathbb{E}$ denotes the mathematical expectation of the objective function $F$. As we defined in the deterministic model, $x(t)$ is the state variable, and $c(t)$ is the control variable. Our goal is to find the optimal sequence $\{x(t), z(t), c(t)\}_{t=0}^{\infty}$ that maximizes the expected utility in the sequence problem.

In the stochastic model, we define the budget constraint on any isolated time scale $\mathbb{T}$ as
\[
x^{\Delta}(t) \in G(t, x(t), z(t), c(t)),
\]
which is also called a first order dynamic inclusion. That means future consumption is uncertain. As we explained in the deterministic model, $x^{\Delta}(t)$ can be chosen as any of the feasible plans that belong to $G(t, x(t), z(t), c(t))$.
4.2 Stochastic Bellman Equation

In Section 3.2, we defined the deterministic Bellman equation as
\[
\left[ e_{\beta \ominus 1}(t, 0)\, V(t, x(t)) \right]^{\Delta} + e_{\beta \ominus 1}(t, 0)\, F(x(t), c(t)) = 0,
\]
where $V(t, x(t))$ is the solution of the Bellman equation, and we showed that it is equivalent to the equation
\[
V(t, x(t)) = \mu(t)\, F(x(t), c(t)) + \beta V(\sigma(t), x(\sigma(t))),
\]
which is given in (3.3). In the stochastic model, the solution of the Bellman equation involves the random variable $z(t)$, i.e., $V(t, x(t), z(t))$. Then we define the stochastic Bellman equation as
\[
V(t, x(t), z(t)) = \mu(t)\, F(x(t), c(t)) + \beta\, \mathbb{E}\left[ V(\sigma(t), x(\sigma(t)), z(\sigma(t))) \right], \tag{4.2}
\]
\[
x^{\Delta}(t) \in G(t, x(t), z(t), c(t)).
\]

4.3 Stochastic Optimal Growth Model

In this section, we solve the stochastic social planner's problem by the guessing method. For the same reasons we stated in Chapter 2, we formulate the stochastic model on any isolated time scale $\mathbb{T}$ with $\sup \mathbb{T} = \infty$. As we did in the deterministic case, we assume the objective function $F$ is the utility function $U$. In this case, the optimal control variable $c(t)$ and state variable $x(t)$ are both random variables, because each of them is now a function of $z(t)$, which is a random variable. Therefore, after we obtain the optimal sequence $\{x(t), z(t), c(t)\}_{t=0}^{\infty}$, we describe its distributions.
4.3.1 Solving the Stochastic Bellman Equation by Guessing Method

Assume $\mathbb{T}$ is an isolated time scale with $\sup \mathbb{T} = \infty$, for all $t \in \mathbb{T}$. Then the stochastic social planner's problem has the form
\[
\sup_{\{k(\sigma(t))\}_{t=0}^{\infty}} \mathbb{E}\left[ \int_{s \in [0,\infty)_{\mathbb{T}}} e_{\beta \ominus 1}(s, 0)\, U(k(s), c(s)) \, \Delta s \right] \tag{4.3}
\]
subject to
\[
k^{\Delta}(t) \in G(t, k(t), z(t), c(t)), \qquad k(0) \in X \ \text{given},
\]
where $k(t)$ is the state variable, $z(t)$ is the random state variable, and $c(t)$ is the control variable. We assume the random variables are lognormally distributed, such that
\[
\ln z(t) \sim N(\mu^{*}, \sigma^{2}).
\]
In other words, $\ln z(t)$ is normally distributed with mean $\mu^{*}$ and variance $\sigma^{2}$. Therefore, $\mathbb{E}[\ln z(t_i)] = \mu^{*}$ for every $t_i \in [0,\infty)_{\mathbb{T}}$. Additionally, because $k(t)$ and $c(t)$ are functions of the random variable $z(t)$, they also have the lognormal distribution. We define the corresponding stochastic Bellman equation as
\[
V(t, k(t), z(t)) = \mu(t)\, U(k(t), c(t)) + \beta\, \mathbb{E}\left[ V(\sigma(t), k(\sigma(t)), z(\sigma(t))) \right], \tag{4.4}
\]
\[
k^{\Delta}(t) \in G(t, k(t), z(t), c(t)).
\]
As in the deterministic case, we use the CRRA utility function, i.e., $U(k(t), c(t)) = \ln(c(t))$. Our goal is to find the sequence $\{k(\sigma(t))\}_{t=0}^{\infty}$ and the distribution of the state variable $k(t)$ that will maximize $(SP)$. The budget constraint for the stochastic case is given as
\[
k(\sigma(t)) = z(t)\, k^{\alpha}(t) - c(t),
\]
where $0 < \alpha < 1$. From the definition of the $\Delta$-derivative on isolated time scales, we obtain
\begin{align*}
k(\sigma(t)) &= z(t)\, k^{\alpha}(t) - c(t) \\
\frac{k(\sigma(t)) - k(t)}{\mu(t)} &= \frac{z(t)\, k^{\alpha}(t) - k(t) - c(t)}{\mu(t)} \\
k^{\Delta}(t) &= \frac{z(t)\, k^{\alpha}(t) - k(t) - c(t)}{\mu(t)} \in G(t, k(t), z(t), c(t)),
\end{align*}
which shows that the budget constraint is appropriately chosen.

By substituting the utility function $U(k(t), c(t)) = \ln(c(t))$ and the budget constraint $k(\sigma(t)) = z(t)k^{\alpha}(t) - c(t)$ into the stochastic Bellman equation given in (4.4), we have
\begin{align}
V(t, k(t), z(t)) = \; & \mu(t)\ln\left( z(t)\, k^{\alpha}(t) - k(\sigma(t)) \right) \nonumber \\
& + \beta\, \mathbb{E}\left[ V(\sigma(t), k(\sigma(t)), z(\sigma(t))) \right]. \tag{4.5}
\end{align}
By guessing, we say the solution of the stochastic Bellman equation, $V(t, k(t), z(t))$, has the form
\[
V(t, k(t), z(t)) = A(t)\ln(k(t)) + B(t)\ln(z(t)) + D(t). \tag{4.6}
\]
Theorem 20. The value function $V(t, k(t), z(t))$ is equal to
\begin{align}
V(t, k(t), z(t)) = \; & \alpha\, e_{\frac{1}{\alpha\beta}\ominus 1}(t,0)\left[ \frac{1}{1-\alpha\beta} - \int_{s\in[0,t)_{\mathbb{T}}} e_{\alpha\beta\ominus 1}(s,0)\,\mu(s)\,\Delta s \right]\ln(k(t)) \nonumber \\
& + e_{\frac{1}{\alpha\beta}\ominus 1}(t,0)\left[ \frac{1}{1-\alpha\beta} - \int_{s\in[0,t)_{\mathbb{T}}} e_{\alpha\beta\ominus 1}(s,0)\,\mu(s)\,\Delta s \right]\ln(z(t)) \nonumber \\
& + e_{\frac{1}{\beta}\ominus 1}(t,0)\left[ D(0) - \int_{s\in[0,t)_{\mathbb{T}}} e_{\beta\ominus 1}(s,0)\left\{ \ln\!\left( \frac{\mu(s)}{\mu(s)+\beta A(\sigma(s))} \right) \right.\right. \nonumber \\
& \left.\left. \qquad + \frac{\beta A(\sigma(s))}{\mu(s)}\ln\!\left( \frac{\beta A(\sigma(s))}{\mu(s)+\beta A(\sigma(s))} \right) + \frac{\beta\mu^{*}B(\sigma(s))}{\mu(s)} \right\}\Delta s \right] \tag{4.7}
\end{align}
for the sequence $\{k(\sigma(t))\}_{t=0}^{\infty}$ which maximizes $(SP)$.
Proof. Because this $V(t, k(t), z(t))$, given with (4.6), is the solution of the stochastic Bellman equation (4.4), it satisfies equation (4.5). Therefore, we have
\begin{align*}
A(t)\ln(k(t)) + B(t)\ln(z(t)) + D(t) = \; & \mu(t)\ln\left( z(t)k^{\alpha}(t) - k(\sigma(t)) \right) \\
& + \beta\,\mathbb{E}\big[ A(\sigma(t))\ln(k(\sigma(t))) + B(\sigma(t))\ln(z(\sigma(t))) + D(\sigma(t)) \big].
\end{align*}
Because $\mathbb{E}$ is linear, we have
\begin{align}
A(t)\ln(k(t)) + B(t)\ln(z(t)) + D(t) = \; & \mu(t)\ln\left( z(t)k^{\alpha}(t) - k(\sigma(t)) \right) \nonumber \\
& + \beta\,\mathbb{E}\big[ A(\sigma(t))\ln(k(\sigma(t))) \big] \nonumber \\
& + \beta\,\mathbb{E}\big[ B(\sigma(t))\ln(z(\sigma(t))) \big] + \beta\,\mathbb{E}\big[ D(\sigma(t)) \big], \tag{4.8}
\end{align}
and since all terms except $\ln(z(\sigma(t)))$ on the RHS of (4.8) are nonrandom, we have
\begin{align}
A(t)\ln(k(t)) + B(t)\ln(z(t)) + D(t) = \; & \mu(t)\ln\left( z(t)k^{\alpha}(t) - k(\sigma(t)) \right) \nonumber \\
& + \beta A(\sigma(t))\ln(k(\sigma(t))) \nonumber \\
& + \beta B(\sigma(t))\,\mathbb{E}\big[ \ln(z(\sigma(t))) \big] + \beta D(\sigma(t)). \tag{4.9}
\end{align}
Since $\mathbb{E}[\ln z(t_i)] = \mu^{*}$ for every $t_i \in [0,\infty)_{\mathbb{T}}$, we rewrite (4.9) as
\begin{align}
A(t)\ln(k(t)) + B(t)\ln(z(t)) + D(t) = \; & \mu(t)\ln\left( z(t)k^{\alpha}(t) - k(\sigma(t)) \right) \nonumber \\
& + \beta A(\sigma(t))\ln(k(\sigma(t))) \nonumber \\
& + \beta\mu^{*}B(\sigma(t)) + \beta D(\sigma(t)). \tag{4.10}
\end{align}
Now, to find the first order conditions, we differentiate both sides of equation (4.10) with respect to $k(\sigma(t))$. Hence, we have
\begin{align*}
\frac{\partial}{\partial k(\sigma(t))}\Big[ A(t)\ln(k(t)) &+ B(t)\ln(z(t)) + D(t) \Big] \\
&= \frac{\partial}{\partial k(\sigma(t))}\Big[ \mu(t)\ln\left( z(t)k^{\alpha}(t) - k(\sigma(t)) \right) + \beta A(\sigma(t))\ln(k(\sigma(t))) \\
&\qquad\qquad\qquad + \beta\mu^{*}B(\sigma(t)) + \beta D(\sigma(t)) \Big], \\
0 &= -\mu(t)\,\frac{1}{z(t)k^{\alpha}(t) - k(\sigma(t))} + \beta A(\sigma(t))\,\frac{1}{k(\sigma(t))}.
\end{align*}
We substitute the budget constraint back, and then we find the control variable $c(t)$ as
\begin{align*}
\frac{\mu(t)}{c(t)} &= \frac{\beta A(\sigma(t))}{z(t)k^{\alpha}(t) - c(t)} \\
\mu(t)\, z(t)k^{\alpha}(t) &= \left[ \mu(t) + \beta A(\sigma(t)) \right] c(t) \\
c(t) &= \frac{\mu(t)}{\mu(t) + \beta A(\sigma(t))}\, z(t)k^{\alpha}(t).
\end{align*}
By plugging $c(t)$ into the budget constraint, we obtain the state variable $k(\sigma(t))$ as
\begin{align*}
k(\sigma(t)) &= z(t)k^{\alpha}(t) - c(t) \\
&= z(t)k^{\alpha}(t) - \frac{\mu(t)}{\mu(t) + \beta A(\sigma(t))}\, z(t)k^{\alpha}(t) \\
&= \frac{\beta A(\sigma(t))}{\mu(t) + \beta A(\sigma(t))}\, z(t)k^{\alpha}(t).
\end{align*}
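Before substituting these policies back into equation (4.10), it may help to see them in simulation. On $\mathbb{T} = \mathbb{Z}$ (so $\mu \equiv 1$), the coefficient $A$ is the constant $\alpha/(1-\alpha\beta)$, hence $\mu(t) + \beta A(\sigma(t)) = 1/(1-\alpha\beta)$, and the rules reduce to $c(t) = (1-\alpha\beta)\, z(t)\, k^{\alpha}(t)$ and $k(\sigma(t)) = \alpha\beta\, z(t)\, k^{\alpha}(t)$. The sketch below simulates one path; all parameter values are illustrative assumptions.

```python
import numpy as np

# One simulated path of the optimal policies on T = Z (mu = 1), where they reduce to
#   c(t) = (1 - alpha*beta) * z(t) * k(t)**alpha      (consume a fixed share of output)
#   k(t+1) = alpha*beta * z(t) * k(t)**alpha          (save the remaining share)
# with ln z(t) ~ N(mu_star, sigma**2) i.i.d.  Parameter values are illustrative.
rng = np.random.default_rng(1)
alpha, beta, mu_star, sigma = 0.3, 0.95, 0.0, 0.1

k, path = 0.5, []
for _ in range(200):
    z = np.exp(rng.normal(mu_star, sigma))
    c = (1 - alpha * beta) * z * k**alpha
    k = alpha * beta * z * k**alpha      # budget k(sigma(t)) = z k^alpha - c holds exactly
    path.append(k)

print("long-run average of ln k:", np.mean(np.log(path[50:])))
```

Note that the saving share $\alpha\beta$ does not depend on the realization of $z(t)$: with logarithmic utility, the income and substitution effects of the shock cancel, which is what makes the guess (4.6) work.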
When we plug these values into equation (4.10), we have
\begin{align*}
A(t)\ln(k(t)) + B(t)\ln(z(t)) + D(t) = \; & \mu(t)\ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))}\, z(t)k^{\alpha}(t) \right) \\
& + \beta A(\sigma(t))\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))}\, z(t)k^{\alpha}(t) \right) \\
& + \beta\mu^{*}B(\sigma(t)) + \beta D(\sigma(t)),
\end{align*}
\begin{align}
A(t)\ln(k(t)) + B(t)\ln(z(t)) + D(t) = \; & \alpha\mu(t)\ln(k(t)) + \mu(t)\ln(z(t)) + \mu(t)\ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))} \right) \nonumber \\
& + \alpha\beta A(\sigma(t))\ln(k(t)) + \beta A(\sigma(t))\ln(z(t)) \nonumber \\
& + \beta A(\sigma(t))\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))} \right) \nonumber \\
& + \beta\mu^{*}B(\sigma(t)) + \beta D(\sigma(t)). \tag{4.11}
\end{align}
Next we have to find the coefficients $A(t)$, $B(t)$, and $D(t)$ in the value function. From the equivalence of the RHS and LHS of equation (4.11), we find $A(t)$, $B(t)$, and $D(t)$ as follows. For the coefficient $A(t)$, we have
\[
A(t) = \alpha\mu(t) + \alpha\beta A(\sigma(t)).
\]
In Chapter 3 we solved this equation for $A(t)$, and similarly we now find $A(t)$ as in equation (3.20), which is
\[
A(t) = \alpha\, e_{\frac{1}{\alpha\beta}\ominus 1}(t,0)\left[ \frac{1}{1-\alpha\beta} - \int_{s\in[0,t)_{\mathbb{T}}} e_{\alpha\beta\ominus 1}(s,0)\,\mu(s)\,\Delta s \right] \tag{4.12}
\]
for any isolated time scale $\mathbb{T}$. Next we find $B(t)$ from
\[
B(t) = \mu(t) + \beta A(\sigma(t)).
\]
Since
\[
B(t) = \mu(t) + \beta A(\sigma(t)) = \frac{A(t)}{\alpha},
\]
we find $B(t)$, by dividing both sides of equation (4.12) by $\alpha$, as
\[
B(t) = e_{\frac{1}{\alpha\beta}\ominus 1}(t,0)\left[ \frac{1}{1-\alpha\beta} - \int_{s\in[0,t)_{\mathbb{T}}} e_{\alpha\beta\ominus 1}(s,0)\,\mu(s)\,\Delta s \right]. \tag{4.13}
\]
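The closed form (4.12) can be cross-checked against its defining recursion without assuming any structure on $\mathbb{T}$: solving $A(t) = \alpha\mu(t) + \alpha\beta A(\sigma(t))$ forward gives $A(t) = \alpha\sum_{j\geq 0} (\alpha\beta)^{j}\,\mu(\sigma^{j}(t))$. The sketch below evaluates this forward sum on a randomly generated isolated time scale and verifies the recursion, together with the bound $A(\sigma(t))/A(t) < 1/(\alpha\beta)$ that is needed in Section 4.3.2. The time-scale points and parameters are hypothetical.

```python
import numpy as np

# Forward-sum solution of A(t) = alpha*mu(t) + alpha*beta*A(sigma(t)) on a hypothetical
# isolated time scale: A(t_i) = alpha * sum_{j>=0} (alpha*beta)**j * mu(t_{i+j}), and
# B(t) = A(t)/alpha.  Values are illustrative only.
rng = np.random.default_rng(2)
alpha, beta = 0.3, 0.9
mu = rng.uniform(0.2, 1.0, size=2000)            # graininess mu(t_i) at each point

w = (alpha * beta) ** np.arange(len(mu))
A = np.array([alpha * np.sum(w[: len(mu) - i] * mu[i:]) for i in range(50)])

resid = A[:-1] - (alpha * mu[:49] + alpha * beta * A[1:])
print("max recursion residual:", np.max(np.abs(resid)))            # ~ 0 up to truncation
print("max A(sigma)/A:", np.max(A[1:] / A[:-1]), "< 1/(alpha*beta) =", 1 / (alpha * beta))
```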
Lastly, we find $D(t)$ by solving
\begin{align*}
D(t) = \; & \mu(t)\ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))} \right) + \beta A(\sigma(t))\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))} \right) \\
& + \beta D(\sigma(t)) + \beta\mu^{*}B(\sigma(t)),
\end{align*}
which also comes from equation (4.11). By rewriting the equation, we have
\[
\frac{1}{\beta}D(t) = \frac{1}{\beta}\left[ \mu(t)\ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))} \right) + \beta A(\sigma(t))\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))} \right) + \beta\mu^{*}B(\sigma(t)) \right] + D(\sigma(t)),
\]
\[
D(\sigma(t)) - \frac{1}{\beta}D(t) = -\frac{\mu(t)}{\beta}\ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))} \right) - A(\sigma(t))\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))} \right) - \mu^{*}B(\sigma(t)).
\]
By dividing both sides of the equation by the integrating factor $e_{\frac{1}{\beta}\ominus 1}(\sigma(t),0)$, we have
\[
\frac{D(\sigma(t))}{e_{\frac{1}{\beta}\ominus 1}(\sigma(t),0)} - \frac{1}{\beta}\,\frac{D(t)}{e_{\frac{1}{\beta}\ominus 1}(\sigma(t),0)} = -\frac{\frac{\mu(t)}{\beta}\ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))} \right)}{e_{\frac{1}{\beta}\ominus 1}(\sigma(t),0)} - \frac{A(\sigma(t))\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))} \right)}{e_{\frac{1}{\beta}\ominus 1}(\sigma(t),0)} - \frac{\mu^{*}B(\sigma(t))}{e_{\frac{1}{\beta}\ominus 1}(\sigma(t),0)}.
\]
From the recurrence relation of the exponential function, i.e.,
\[
e_{\frac{1}{\beta}\ominus 1}(\sigma(t),0) = \frac{1}{\beta}\, e_{\frac{1}{\beta}\ominus 1}(t,0),
\]
we have
\[
\frac{D(\sigma(t))}{e_{\frac{1}{\beta}\ominus 1}(\sigma(t),0)} - \frac{D(t)}{e_{\frac{1}{\beta}\ominus 1}(t,0)} = -\frac{\mu(t)\ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))} \right)}{e_{\frac{1}{\beta}\ominus 1}(t,0)} - \frac{\beta A(\sigma(t))\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))} \right)}{e_{\frac{1}{\beta}\ominus 1}(t,0)} - \frac{\beta\mu^{*}B(\sigma(t))}{e_{\frac{1}{\beta}\ominus 1}(t,0)}.
\]
To constitute the $\Delta$-derivative of $D(t)$ on the LHS, we divide the whole equation by $\mu(t)$. Thus, we have
\begin{align*}
\frac{1}{\mu(t)}\left[ \frac{D(\sigma(t))}{e_{\frac{1}{\beta}\ominus 1}(\sigma(t),0)} - \frac{D(t)}{e_{\frac{1}{\beta}\ominus 1}(t,0)} \right] = \; & -e_{\ominus\left(\frac{1}{\beta}\ominus 1\right)}(t,0)\,\ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))} \right) \\
& - e_{\ominus\left(\frac{1}{\beta}\ominus 1\right)}(t,0)\,\frac{\beta A(\sigma(t))}{\mu(t)}\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))} \right) \\
& - e_{\ominus\left(\frac{1}{\beta}\ominus 1\right)}(t,0)\,\frac{\beta\mu^{*}B(\sigma(t))}{\mu(t)},
\end{align*}
which is equivalent to
\[
\left[ \frac{D(t)}{e_{\frac{1}{\beta}\ominus 1}(t,0)} \right]^{\Delta} = -e_{\ominus\left(\frac{1}{\beta}\ominus 1\right)}(t,0)\left[ \ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))} \right) + \frac{\beta A(\sigma(t))}{\mu(t)}\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))} \right) + \frac{\beta\mu^{*}B(\sigma(t))}{\mu(t)} \right].
\]
By the definition of "circle minus" on time scales, we have
\[
e_{\ominus\left(\frac{1}{\beta}\ominus 1\right)}(t,0) = e_{\beta\ominus 1}(t,0).
\]
This implies
\[
\left[ \frac{D(t)}{e_{\frac{1}{\beta}\ominus 1}(t,0)} \right]^{\Delta} = -e_{\beta\ominus 1}(t,0)\left[ \ln\!\left( \frac{\mu(t)}{\mu(t)+\beta A(\sigma(t))} \right) + \frac{\beta A(\sigma(t))}{\mu(t)}\ln\!\left( \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))} \right) + \frac{\beta\mu^{*}B(\sigma(t))}{\mu(t)} \right]. \tag{4.14}
\]
Equation (4.14) is a first order linear dynamic equation on the isolated time scale $\mathbb{T}$, and by integrating both sides on the domain $[0,t)_{\mathbb{T}}$, we have
\[
\int_{s\in[0,t)_{\mathbb{T}}} \left[ \frac{D(s)}{e_{\frac{1}{\beta}\ominus 1}(s,0)} \right]^{\Delta} \Delta s = -\int_{s\in[0,t)_{\mathbb{T}}} e_{\beta\ominus 1}(s,0)\left[ \ln\!\left( \frac{\mu(s)}{\mu(s)+\beta A(\sigma(s))} \right) + \frac{\beta A(\sigma(s))}{\mu(s)}\ln\!\left( \frac{\beta A(\sigma(s))}{\mu(s)+\beta A(\sigma(s))} \right) + \frac{\beta\mu^{*}B(\sigma(s))}{\mu(s)} \right]\Delta s,
\]
so that, since $e_{\frac{1}{\beta}\ominus 1}(0,0) = 1$,
\[
\frac{D(t)}{e_{\frac{1}{\beta}\ominus 1}(t,0)} - D(0) = -\int_{s\in[0,t)_{\mathbb{T}}} e_{\beta\ominus 1}(s,0)\left[ \ln\!\left( \frac{\mu(s)}{\mu(s)+\beta A(\sigma(s))} \right) + \frac{\beta A(\sigma(s))}{\mu(s)}\ln\!\left( \frac{\beta A(\sigma(s))}{\mu(s)+\beta A(\sigma(s))} \right) + \frac{\beta\mu^{*}B(\sigma(s))}{\mu(s)} \right]\Delta s.
\]
Hence, we find $D(t)$ as
\[
D(t) = e_{\frac{1}{\beta}\ominus 1}(t,0)\left[ D(0) - \int_{s\in[0,t)_{\mathbb{T}}} e_{\beta\ominus 1}(s,0)\left\{ \ln\!\left( \frac{\mu(s)}{\mu(s)+\beta A(\sigma(s))} \right) + \frac{\beta A(\sigma(s))}{\mu(s)}\ln\!\left( \frac{\beta A(\sigma(s))}{\mu(s)+\beta A(\sigma(s))} \right) + \frac{\beta\mu^{*}B(\sigma(s))}{\mu(s)} \right\}\Delta s \right], \tag{4.15}
\]
where
\[
A(\sigma(t)) = -\frac{1}{\beta}\mu(t) + \frac{1}{\beta}\, e_{\frac{1}{\alpha\beta}\ominus 1}(t,0)\left[ \frac{1}{1-\alpha\beta} - \int_{s\in[0,t)_{\mathbb{T}}} e_{\alpha\beta\ominus 1}(s,0)\,\mu(s)\,\Delta s \right]
\]
and
\[
B(\sigma(t)) = \frac{A(\sigma(t))}{\alpha} = -\frac{1}{\alpha\beta}\mu(t) + \frac{1}{\alpha\beta}\, e_{\frac{1}{\alpha\beta}\ominus 1}(t,0)\left[ \frac{1}{1-\alpha\beta} - \int_{s\in[0,t)_{\mathbb{T}}} e_{\alpha\beta\ominus 1}(s,0)\,\mu(s)\,\Delta s \right].
\]
Therefore, the value function $V(t, k(t), z(t))$, given with equation (4.6), is found to be exactly the expression (4.7) stated in the theorem, for the sequence $\{k(\sigma(t))\}_{t=0}^{\infty}$ which maximizes $(SP)$.
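The proof above leans repeatedly on three facts about the time-scale exponential on an isolated time scale: the recurrence $e_{p}(\sigma(t),0) = (1 + \mu(t)\,p(t))\, e_{p}(t,0)$, the circle-minus identity $e_{\ominus p} = 1/e_{p}$, and $e_{p}\, e_{\ominus p} = e_{0} = 1$. As a quick sanity check, the sketch below verifies the two specific instances used here, $e_{\frac{1}{\beta}\ominus 1}(\sigma(t),0) = \frac{1}{\beta}\, e_{\frac{1}{\beta}\ominus 1}(t,0)$ and $e_{\ominus(\frac{1}{\beta}\ominus 1)} = e_{\beta\ominus 1}$, on a hypothetical isolated time scale with randomly drawn graininess.

```python
import numpy as np

# On an isolated time scale, e_p(t_n, 0) = prod_{i<n} (1 + mu(t_i) * p(t_i)).
# For p = (1/beta) (-) 1, i.e. p(t) = (1/beta - 1)/mu(t), the factor 1 + mu*p equals
# 1/beta at every point, independently of the graininess.  Values are illustrative.
rng = np.random.default_rng(3)
beta = 0.9
mu = rng.uniform(0.3, 1.2, size=30)              # random graininess mu(t_i)

def e_ts(one_plus_mup):
    # cumulative product with e_p(t_0, 0) = 1
    return np.concatenate(([1.0], np.cumprod(one_plus_mup)))

e_inv = e_ts(np.full_like(mu, 1 / beta))         # e_{(1/beta) (-) 1}(t_n, 0)
e_bet = e_ts(np.full_like(mu, beta))             # e_{beta (-) 1}(t_n, 0)

print(np.allclose(e_inv[1:], (1 / beta) * e_inv[:-1]))   # recurrence: e(sigma(t),0) = e(t,0)/beta
print(np.allclose(1 / e_inv, e_bet))                     # circle minus: 1/e_{(1/b)(-)1} = e_{b(-)1}
```

Both checks print True for any drawn graininess, since $1 + \mu(t)\left[ \left(\tfrac{1}{\beta}\right)\ominus 1 \right](t) = \tfrac{1}{\beta}$ pointwise.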
4.3.2 Distributions of the Optimal Solution

We are searching for the optimal state variable $k(t)$, which is a function of the random state variables $z(t)$. Because we cannot obtain these optimal solutions explicitly, we shall characterize their distributions. We assume $\ln z(t) \sim N(\mu^{*}, \sigma^{2})$, and deviations of the random variable $z(t)$ are uncorrelated in different time periods; i.e., $\mathrm{Cov}(z(t_i), z(t_j)) = 0$ for all $i \neq j$. Now we need to find the mean and variance of $\ln(k(t))$.

First, let
\[
h_{t} = \left\{ t_{i} : t_{i} \in [0,t] \text{ and } t_{i} \text{ left-scattered} \right\} \quad \text{and} \quad N(h_{t}) = n.
\]
Then, by setting
\[
f(t) = \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))},
\]
we rewrite
\[
k(\sigma(t)) = \frac{\beta A(\sigma(t))}{\mu(t)+\beta A(\sigma(t))}\, z(t)\, k^{\alpha}(t)
\]
as
\[
k(\sigma(t)) = f(t)\, z(t)\, k^{\alpha}(t). \tag{4.16}
\]
Now, by applying the natural logarithm to both sides of equation (4.16), we have
\begin{align*}
\ln(k(\sigma(t))) &= \ln\left( f(t)\, z(t)\, k^{\alpha}(t) \right) \\
&= \ln(f(t)\, z(t)) + \alpha\ln(k(t)).
\end{align*}
Let $\ln(k(t)) = u(t)$. Then we have
\[
u(\sigma(t)) = \ln(f(t)\, z(t)) + \alpha\, u(t).
\]
By elementary algebra, we obtain
\[
u(\sigma(t)) - \alpha\, u(t) = \ln(f(t)\, z(t)),
\]
and by dividing both sides of the equation by the integrating factor $e_{\alpha\ominus 1}(\sigma(t),0)$, the last equation becomes
\[
\frac{u(\sigma(t))}{e_{\alpha\ominus 1}(\sigma(t),0)} - \frac{\alpha\, u(t)}{e_{\alpha\ominus 1}(\sigma(t),0)} = \frac{\ln(f(t)\, z(t))}{e_{\alpha\ominus 1}(\sigma(t),0)}.
\]
Then, by the recurrence relation of the exponential function, i.e.,
\[
e_{\alpha\ominus 1}(\sigma(t),0) = \alpha\, e_{\alpha\ominus 1}(t,0),
\]
we have
\[
\frac{u(\sigma(t))}{e_{\alpha\ominus 1}(\sigma(t),0)} - \frac{u(t)}{e_{\alpha\ominus 1}(t,0)} = \frac{\ln(f(t)\, z(t))}{\alpha\, e_{\alpha\ominus 1}(t,0)}.
\]
We divide the whole equation by $\mu(t)$, and then we obtain the first order linear dynamic equation on time scales
\[
\frac{1}{\mu(t)}\left[ \frac{u(\sigma(t))}{e_{\alpha\ominus 1}(\sigma(t),0)} - \frac{u(t)}{e_{\alpha\ominus 1}(t,0)} \right] = \frac{\ln(f(t)\, z(t))}{\alpha\,\mu(t)\, e_{\alpha\ominus 1}(t,0)},
\]
which is equivalent to
\[
\left[ \frac{u(t)}{e_{\alpha\ominus 1}(t,0)} \right]^{\Delta} = \frac{\ln(f(t)\, z(t))}{\alpha\,\mu(t)\, e_{\alpha\ominus 1}(t,0)}. \tag{4.17}
\]
Equation (4.17) is a first order linear dynamic equation on the isolated time scale $\mathbb{T}$. Then, by integrating both sides on the domain $\mathbb{T}\cap[0,t) = [0,t)_{\mathbb{T}}$, we have
\[
\int_{s\in[0,t)_{\mathbb{T}}} \left[ \frac{u(s)}{e_{\alpha\ominus 1}(s,0)} \right]^{\Delta} \Delta s = \int_{s\in[0,t)_{\mathbb{T}}} \frac{\ln(f(s)\, z(s))}{\alpha\,\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s = \int_{s\in[0,t)_{\mathbb{T}}} \left[ \frac{\ln(f(s))}{\alpha\,\mu(s)\, e_{\alpha\ominus 1}(s,0)} + \frac{\ln(z(s))}{\alpha\,\mu(s)\, e_{\alpha\ominus 1}(s,0)} \right]\Delta s,
\]
\[
\frac{u(t)}{e_{\alpha\ominus 1}(t,0)} - \frac{u(0)}{e_{\alpha\ominus 1}(0,0)} = \int_{s\in[0,t)_{\mathbb{T}}} \left[ \frac{\ln\!\left( \frac{\beta A(\sigma(s))}{\mu(s)+\beta A(\sigma(s))} \right)}{\alpha\,\mu(s)\, e_{\alpha\ominus 1}(s,0)} + \frac{\ln(z(s))}{\alpha\,\mu(s)\, e_{\alpha\ominus 1}(s,0)} \right]\Delta s.
\]
Since $A(t) = \alpha\mu(t) + \alpha\beta A(\sigma(t))$, that is, $\frac{A(t)}{\alpha} = \mu(t) + \beta A(\sigma(t))$, we have
\[
u(t) = e_{\alpha\ominus 1}(t,0)\left[ u(0) + \frac{1}{\alpha}\int_{s\in[0,t)_{\mathbb{T}}} \left( \frac{\ln\!\left( \frac{\alpha\beta A(\sigma(s))}{A(s)} \right)}{\mu(s)\, e_{\alpha\ominus 1}(s,0)} + \frac{\ln(z(s))}{\mu(s)\, e_{\alpha\ominus 1}(s,0)} \right)\Delta s \right].
\]
Hence, we find $u(t)$ as
\begin{align}
u(t) = \; & e_{\alpha\ominus 1}(t,0)\, u(0) + \frac{\ln(\alpha\beta)}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{1}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s \nonumber \\
& + \frac{1}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{\ln\!\left( \frac{A(\sigma(s))}{A(s)} \right)}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s \nonumber \\
& + \frac{1}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{\ln(z(s))}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s. \tag{4.18}
\end{align}
We can evaluate the first integral in equation (4.18). To do so, we use the definition of the exponential function on time scales. We know
\[
e^{\Delta}_{p}(t,0) = p(t)\, e_{p}(t,0).
\]
This implies
\[
e^{\Delta}_{\ominus(\alpha\ominus 1)}(t,0) = \left[ \ominus(\alpha\ominus 1) \right](t)\; e_{\ominus(\alpha\ominus 1)}(t,0). \tag{4.19}
\]
Since $\ominus(\alpha\ominus 1) = \frac{1-\alpha}{\alpha\,\mu(t)}$ from the definition of "circle minus," and $e_{\ominus(\alpha\ominus 1)}(t,0) = \frac{1}{e_{\alpha\ominus 1}(t,0)}$ from the properties of the exponential function, we rewrite equation (4.19) as
\[
e^{\Delta}_{\ominus(\alpha\ominus 1)}(t,0) = \left( \frac{1-\alpha}{\alpha\,\mu(t)} \right)\frac{1}{e_{\alpha\ominus 1}(t,0)},
\]
\[
\frac{\alpha}{1-\alpha}\, e^{\Delta}_{\ominus(\alpha\ominus 1)}(t,0) = \frac{1}{\mu(t)\, e_{\alpha\ominus 1}(t,0)}. \tag{4.20}
\]
By integrating both sides of equation (4.20) on the domain $\mathbb{T}\cap[0,t) = [0,t)_{\mathbb{T}}$, we have
\[
\frac{\alpha}{1-\alpha}\int_{s\in[0,t)_{\mathbb{T}}} e^{\Delta}_{\ominus(\alpha\ominus 1)}(s,0)\,\Delta s = \int_{s\in[0,t)_{\mathbb{T}}} \frac{1}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s,
\]
\[
\frac{\alpha}{1-\alpha}\left[ e_{\ominus(\alpha\ominus 1)}(t,0) - e_{\ominus(\alpha\ominus 1)}(0,0) \right] = \int_{s\in[0,t)_{\mathbb{T}}} \frac{1}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s,
\]
which is equivalent to
\[
\frac{\alpha}{1-\alpha}\, e_{\ominus(\alpha\ominus 1)}(t,0) - \frac{\alpha}{1-\alpha} = \int_{s\in[0,t)_{\mathbb{T}}} \frac{1}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s. \tag{4.21}
\]
Then we substitute equation (4.21) for the first integral in (4.18), and we obtain
\begin{align*}
u(t) = \; & e_{\alpha\ominus 1}(t,0)\, u(0) + \frac{\ln(\alpha\beta)}{\alpha}\, e_{\alpha\ominus 1}(t,0)\left[ \frac{\alpha}{1-\alpha}\, e_{\ominus(\alpha\ominus 1)}(t,0) - \frac{\alpha}{1-\alpha} \right] \\
& + \frac{1}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{\ln\!\left( \frac{A(\sigma(s))}{A(s)} \right)}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s + \frac{1}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{\ln(z(s))}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s \\
= \; & e_{\alpha\ominus 1}(t,0)\, u(0) + \frac{\ln(\alpha\beta)}{1-\alpha}\, e_{\alpha\ominus 1}(t,0)\, e_{\ominus(\alpha\ominus 1)}(t,0) - \frac{\ln(\alpha\beta)}{1-\alpha}\, e_{\alpha\ominus 1}(t,0) \\
& + \frac{1}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{\ln\!\left( \frac{A(\sigma(s))}{A(s)} \right)}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s + \frac{1}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{\ln(z(s))}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s.
\end{align*}
Since
\[
e_{\alpha\ominus 1}(t,0)\, e_{\ominus(\alpha\ominus 1)}(t,0) = e_{(\alpha\ominus 1)\oplus(\ominus(\alpha\ominus 1))}(t,0) = e_{0}(t,0) = 1,
\]
we have
\begin{align*}
u(t) = \; & e_{\alpha\ominus 1}(t,0)\, u(0) + \frac{\ln(\alpha\beta)}{1-\alpha} - \frac{\ln(\alpha\beta)}{1-\alpha}\, e_{\alpha\ominus 1}(t,0) \\
& + \frac{1}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{\ln\!\left( \frac{A(\sigma(s))}{A(s)} \right)}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s + \frac{1}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{\ln(z(s))}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s.
\end{align*}
Thus, we find $\ln(k(t))$ as
\begin{align}
\ln(k(t)) = \; & e_{\alpha\ominus 1}(t,0)\left[ \ln(k(0)) - \frac{\ln(\alpha\beta)}{1-\alpha} \right] + \frac{\ln(\alpha\beta)}{1-\alpha} \nonumber \\
& + \frac{1}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{\ln\!\left( \frac{A(\sigma(s))}{A(s)} \right)}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s \nonumber \\
& + \frac{1}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{\ln(z(s))}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s \tag{4.22}
\end{align}
for $\ln(k(t)) = u(t)$.
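Equation (4.22) can be checked against the recursion $u(\sigma(t)) = \ln(f(t)z(t)) + \alpha u(t)$ it was derived from. On an isolated time scale, $e_{\alpha\ominus 1}(\sigma^{n}(0),0) = \alpha^{n}$ and the $\Delta$-integral over $[0,t)_{\mathbb{T}}$ is a finite sum of $\mu(t_{i})$ times the integrand, so both expressions can be evaluated for one fixed draw of the shocks. The sketch below does so on a hypothetical isolated time scale; all numerical values are illustrative.

```python
import numpy as np

# Check of (4.22) against direct iteration of u(sigma(t)) = ln(f z) + alpha*u(t),
# with f(t) = beta*A(sigma(t))/(mu(t) + beta*A(sigma(t))) and A from its recursion.
# Hypothetical isolated time scale; all values illustrative.
rng = np.random.default_rng(4)
alpha, beta, mu_star, sigma = 0.3, 0.9, 0.05, 0.2
n, tail = 40, 2000
mu = rng.uniform(0.2, 1.0, size=n + tail)

w = (alpha * beta) ** np.arange(n + tail)
A = np.array([alpha * np.sum(w[: n + tail - i] * mu[i:]) for i in range(n + 1)])
f = beta * A[1:] / (mu[:n] + beta * A[1:])
print(np.allclose(f, alpha * beta * A[1:] / A[:-1]))   # f = alpha*beta*A(sigma)/A, as used above

lnz = rng.normal(mu_star, sigma, size=n)               # one draw of the shocks

u = [np.log(0.5)]                                      # u(0) = ln k(0)
for i in range(n):
    u.append(np.log(f[i]) + lnz[i] + alpha * u[-1])    # direct iteration

# (4.22) before the ln(alpha*beta) split: u(t_n) = alpha^n * [u(0) + (1/alpha) *
#   sum_i mu_i * (ln f_i + ln z_i) / (mu_i * alpha^i)]
e = alpha ** np.arange(n)
closed = alpha**n * (u[0] + (1 / alpha) * np.sum((np.log(f) + lnz) / e))
print(np.isclose(u[-1], closed))                       # True: the two expressions agree
```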
We stated that the state variable $k(t)$ has the lognormal distribution. Therefore, we need to find the expected value (or mean) and the variance of $\ln(k(t))$.

Expected value of the state variable $k(t)$:

We take the expected value of both sides of equation (4.22). By using the properties of expectation and the fact that $\mathbb{E}[\ln z(t_i)] = \mu^{*}$ for every $t_i \in [0,\infty)_{\mathbb{T}}$, we have
\begin{align}
\mathbb{E}[\ln(k(t))] = \; & e_{\alpha\ominus 1}(t,0)\left[ \ln(k(0)) - \frac{\ln(\alpha\beta)}{1-\alpha} \right] + \frac{\ln(\alpha\beta)}{1-\alpha} \nonumber \\
& + \frac{1}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{\ln\!\left( \frac{A(\sigma(s))}{A(s)} \right)}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s \nonumber \\
& + \frac{\mu^{*}}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{1}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s. \tag{4.23}
\end{align}
By plugging equation (4.21) into (4.23), we obtain
\begin{align*}
\mathbb{E}[\ln(k(t))] = \; & e_{\alpha\ominus 1}(t,0)\left[ \ln(k(0)) - \frac{\ln(\alpha\beta)}{1-\alpha} \right] + \frac{\ln(\alpha\beta)}{1-\alpha} \\
& + \frac{1}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{\ln\!\left( \frac{A(\sigma(s))}{A(s)} \right)}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s \\
& + \frac{\mu^{*}}{\alpha}\, e_{\alpha\ominus 1}(t,0)\left[ \frac{\alpha}{1-\alpha}\, e_{\ominus(\alpha\ominus 1)}(t,0) - \frac{\alpha}{1-\alpha} \right].
\end{align*}
Since
\[
e_{\alpha\ominus 1}(t,0)\, e_{\ominus(\alpha\ominus 1)}(t,0) = e_{(\alpha\ominus 1)\oplus(\ominus(\alpha\ominus 1))}(t,0) = e_{0}(t,0) = 1,
\]
we find $\mathbb{E}[\ln(k(t))]$ as
\begin{align}
\mathbb{E}[\ln(k(t))] = \; & e_{\alpha\ominus 1}(t,0)\left[ \ln(k(0)) - \frac{\ln(\alpha\beta)}{1-\alpha} - \frac{\mu^{*}}{1-\alpha} \right] + \frac{\ln(\alpha\beta)}{1-\alpha} + \frac{\mu^{*}}{1-\alpha} \nonumber \\
& + \frac{1}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{\ln\!\left( \frac{A(\sigma(s))}{A(s)} \right)}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s. \tag{4.24}
\end{align}
However, we have to find the expected value (or mean) as we move forward in time. To do so, we have to take the limit of equation (4.24) as $t \to \infty$. Therefore, we have to show the integral in equation (4.24) is finite. First we check the ratio $\frac{A(\sigma(t))}{A(t)}$:
\begin{align}
\frac{A(\sigma(t))}{A(t)} &= \frac{-\frac{1}{\beta}\mu(t) + \frac{1}{\beta}\, e_{\frac{1}{\alpha\beta}\ominus 1}(t,0)\left[ \frac{1}{1-\alpha\beta} - \int_{s\in[0,t)_{\mathbb{T}}} e_{\alpha\beta\ominus 1}(s,0)\,\mu(s)\,\Delta s \right]}{\alpha\, e_{\frac{1}{\alpha\beta}\ominus 1}(t,0)\left[ \frac{1}{1-\alpha\beta} - \int_{s\in[0,t)_{\mathbb{T}}} e_{\alpha\beta\ominus 1}(s,0)\,\mu(s)\,\Delta s \right]} \nonumber \\
&= \frac{\frac{1}{\beta}\, e_{\frac{1}{\alpha\beta}\ominus 1}(t,0)\left[ \frac{1}{1-\alpha\beta} - \mu(t)\, e_{\alpha\beta\ominus 1}(t,0) - \int_{s\in[0,t)_{\mathbb{T}}} e_{\alpha\beta\ominus 1}(s,0)\,\mu(s)\,\Delta s \right]}{\alpha\, e_{\frac{1}{\alpha\beta}\ominus 1}(t,0)\left[ \frac{1}{1-\alpha\beta} - \int_{s\in[0,t)_{\mathbb{T}}} e_{\alpha\beta\ominus 1}(s,0)\,\mu(s)\,\Delta s \right]} \nonumber \\
&= \frac{1}{\alpha\beta}\left[ 1 - \frac{\mu(t)\, e_{\alpha\beta\ominus 1}(t,0)}{\frac{1}{1-\alpha\beta} - \int_{s\in[0,t)_{\mathbb{T}}} e_{\alpha\beta\ominus 1}(s,0)\,\mu(s)\,\Delta s} \right], \tag{4.25}
\end{align}
where in the second step we used $\mu(t) = e_{\frac{1}{\alpha\beta}\ominus 1}(t,0)\,\mu(t)\, e_{\alpha\beta\ominus 1}(t,0)$, since $e_{\frac{1}{\alpha\beta}\ominus 1}\, e_{\alpha\beta\ominus 1} = e_{0} = 1$. For the existence of the term $\ln\!\left( \frac{A(\sigma(t))}{A(t)} \right)$, we should have $\frac{A(\sigma(t))}{A(t)} > 0$. We already know $\frac{1}{\alpha\beta} > 0$, since $0 < \alpha\beta < 1$. Thus, it is sufficient to check the sign of the expression in the brackets in equation (4.25).
We set
\[
P = 1 - \frac{\mu(t)\, e_{\alpha\beta\ominus 1}(t,0)}{\frac{1}{1-\alpha\beta} - \int_{s\in[0,t)_{\mathbb{T}}} e_{\alpha\beta\ominus 1}(s,0)\,\mu(s)\,\Delta s},
\]
and we want it to be positive, i.e., $P > 0$. Indeed, by Lemma 15 in Chapter 2, we know
\[
\sum_{i=0}^{\infty} (\alpha\beta)^{i} - \sum_{\sigma^{i}(0)\in[0,\sigma(t))_{\mathbb{T}}} (\alpha\beta)^{i}\,\mu\!\left( \sigma^{i}(0) \right) > 0 \tag{4.26}
\]
for an isolated time scale $\mathbb{T}$ with $0 < \mu(t) \leq 1$. From the property of the exponential function given with Remark 6 in Chapter 2, the term of the second sum over $[t,\sigma(t))_{\mathbb{T}}$ is
\[
(\alpha\beta)^{n}\,\mu\!\left( \sigma^{n}(0) \right) = e_{\alpha\beta\ominus 1}(t,0)\,\mu(t), \qquad t = \sigma^{n}(0),
\]
which is positive. We add this positive term to both sides of inequality (4.26), and we obtain
\[
\sum_{i=0}^{\infty} (\alpha\beta)^{i} - \sum_{\sigma^{i}(0)\in[0,t)_{\mathbb{T}}} (\alpha\beta)^{i}\,\mu\!\left( \sigma^{i}(0) \right) > e_{\alpha\beta\ominus 1}(t,0)\,\mu(t). \tag{4.27}
\]
Then, we divide both sides of inequality (4.27) by
\[
\sum_{i=0}^{\infty} (\alpha\beta)^{i} - \sum_{\sigma^{i}(0)\in[0,t)_{\mathbb{T}}} (\alpha\beta)^{i}\,\mu\!\left( \sigma^{i}(0) \right) > 0;
\]
i.e., we have
\[
1 > \frac{e_{\alpha\beta\ominus 1}(t,0)\,\mu(t)}{\sum_{i=0}^{\infty} (\alpha\beta)^{i} - \sum_{\sigma^{i}(0)\in[0,t)_{\mathbb{T}}} (\alpha\beta)^{i}\,\mu\!\left( \sigma^{i}(0) \right)} > 0.
\]
Hence, we find $P$ positive; i.e.,
\[
P = 1 - \frac{\mu(t)\, e_{\alpha\beta\ominus 1}(t,0)}{\frac{1}{1-\alpha\beta} - \int_{s\in[0,t)_{\mathbb{T}}} e_{\alpha\beta\ominus 1}(s,0)\,\mu(s)\,\Delta s} = 1 - \frac{e_{\alpha\beta\ominus 1}(t,0)\,\mu(t)}{\sum_{i=0}^{\infty} (\alpha\beta)^{i} - \sum_{\sigma^{i}(0)\in[0,t)_{\mathbb{T}}} (\alpha\beta)^{i}\,\mu\!\left( \sigma^{i}(0) \right)} > 0.
\]
Finally, this property implies that
\[
\frac{\mu(t)\, e_{\alpha\beta\ominus 1}(t,0)}{\frac{1}{1-\alpha\beta} - \int_{s\in[0,t)_{\mathbb{T}}} e_{\alpha\beta\ominus 1}(s,0)\,\mu(s)\,\Delta s} < 1,
\]
which means
\[
\frac{A(\sigma(t))}{A(t)} = \frac{1}{\alpha\beta}\left[ 1 - \frac{\mu(t)\, e_{\alpha\beta\ominus 1}(t,0)}{\frac{1}{1-\alpha\beta} - \int_{s\in[0,t)_{\mathbb{T}}} e_{\alpha\beta\ominus 1}(s,0)\,\mu(s)\,\Delta s} \right] < \frac{1}{\alpha\beta};
\]
i.e., $\ln\!\left( \frac{A(\sigma(t))}{A(t)} \right)$ exists and is finite. Thus, the limit of (4.24) exists as $t \to \infty$. Also, from equation (2.5), we have
\[
\lim_{t\to\infty} e_{\alpha\ominus 1}(t,0) = 0.
\]
Hence, as $t \to \infty$, the mean is
\[
\mathbb{E}[\ln(k(t))] = \frac{\ln(\alpha\beta)}{1-\alpha} + \frac{\mu^{*}}{1-\alpha}.
\]
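A quick Monte Carlo experiment illustrates this limit on $\mathbb{T} = \mathbb{Z}$, where (as noted in Section 4.3.1) the optimal state reduces to $k(t+1) = \alpha\beta\, z(t)\, k^{\alpha}(t)$. The parameter values below are illustrative assumptions.

```python
import numpy as np

# Monte Carlo check of E[ln k(t)] -> ln(alpha*beta)/(1-alpha) + mu*/(1-alpha) on T = Z.
rng = np.random.default_rng(5)
alpha, beta, mu_star, sigma = 0.3, 0.95, 0.02, 0.1

reps, T = 20000, 300
k = np.full(reps, 0.5)
for _ in range(T):
    z = np.exp(rng.normal(mu_star, sigma, size=reps))
    k = alpha * beta * z * k**alpha          # optimal state recursion on Z

print("sample mean of ln k(T):", np.log(k).mean())
print("predicted limit:       ", (np.log(alpha * beta) + mu_star) / (1 - alpha))
```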
Variance of the state variable $k(t)$:

Next, we shall find the variance of the state variable $k(t)$. From the definition of variance, we have
\begin{align*}
\mathrm{Var}[\ln(k(t))] &= \mathbb{E}\left[ \left( \ln(k(t)) - \mathbb{E}[\ln(k(t))] \right)^{2} \right] \\
&= \mathbb{E}\left[ \left( \frac{1}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{\ln(z(s))}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s + \frac{\mu^{*}}{1-\alpha}\, e_{\alpha\ominus 1}(t,0) - \frac{\mu^{*}}{1-\alpha} \right)^{2} \right].
\end{align*}
By multiplying equation (4.21) by the nonzero expression $-\frac{\mu^{*}}{\alpha}\, e_{\alpha\ominus 1}(t,0)$ and using the properties of the exponential function, we have
\begin{align}
-\frac{\mu^{*}}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{1}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s &= -\frac{\mu^{*}}{\alpha}\, e_{\alpha\ominus 1}(t,0)\,\frac{\alpha}{1-\alpha}\left[ e_{\ominus(\alpha\ominus 1)}(t,0) - 1 \right] \nonumber \\
&= \frac{\mu^{*}}{1-\alpha}\, e_{\alpha\ominus 1}(t,0) - \frac{\mu^{*}}{1-\alpha}. \tag{4.28}
\end{align}
We substitute (4.28) into the variance and obtain
\[
\mathrm{Var}[\ln(k(t))] = \mathbb{E}\left[ \left( \frac{1}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{\ln(z(s))}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s - \frac{\mu^{*}}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{1}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s \right)^{2} \right].
\]
Then, since $\mathbb{E}[\ln z(t_i)] = \mu^{*}$ for every $t_i \in [0,\infty)_{\mathbb{T}}$, we have
\[
\mathrm{Var}[\ln(k(t))] = \mathbb{E}\left[ \left( \frac{1}{\alpha}\, e_{\alpha\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{\ln(z(s)) - \mu^{*}}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s \right)^{2} \right].
\]
By expanding this and using the properties of the expectation and the exponential function, we have
\begin{align*}
\mathrm{Var}[\ln(k(t))] &= \frac{1}{\alpha^{2}}\, e^{2}_{\alpha\ominus 1}(t,0)\, \mathbb{E}\left[ \left( \int_{s\in[0,t)_{\mathbb{T}}} \frac{\ln(z(s)) - \mu^{*}}{\mu(s)\, e_{\alpha\ominus 1}(s,0)}\,\Delta s \right)^{2} \right] \\
&= \frac{1}{\alpha^{2}}\, e^{2}_{\alpha\ominus 1}(t,0)\left[ \int_{s=r\in[0,t)_{\mathbb{T}}} \frac{\mathbb{E}\left[ \ln(z(s)) - \mu^{*} \right]^{2}}{\mu^{2}(s)\, e^{2}_{\alpha\ominus 1}(s,0)}\,\Delta s + \int_{s\neq r} \frac{\mathbb{E}\left[ \ln(z(s)) - \mu^{*} \right]\mathbb{E}\left[ \ln(z(r)) - \mu^{*} \right]}{\mu^{2}(s)\, e^{2}_{\alpha\ominus 1}(s,0)}\,\Delta s \right],
\end{align*}
where $e^{2}_{\alpha\ominus 1}(t,0) = e_{\alpha^{2}\ominus 1}(t,0)$. Since $\mathrm{Var}[\ln(z(t_i))] = \mathbb{E}\left[ \ln(z(s)) - \mu^{*} \right]^{2} = \sigma^{2}$ for every $t_i \in [0,\infty)_{\mathbb{T}}$, and $\mathrm{Cov}(z(t_i), z(t_j)) = 0$ for all $i \neq j$, we obtain the variance of the state variable $k(t)$ as
\[
\mathrm{Var}[\ln(k(t))] = \frac{\sigma^{2}}{\alpha^{2}}\, e_{\alpha^{2}\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{1}{\mu^{2}(s)\, e_{\alpha^{2}\ominus 1}(s,0)}\,\Delta s. \tag{4.29}
\]
Since
\[
e^{\Delta}_{\ominus(\alpha^{2}\ominus 1)}(t,0) = \left[ \ominus(\alpha^{2}\ominus 1) \right](t)\; e_{\ominus(\alpha^{2}\ominus 1)}(t,0),
\]
we have
\[
e^{\Delta}_{\ominus(\alpha^{2}\ominus 1)}(t,0) = \frac{1-\alpha^{2}}{\alpha^{2}\,\mu(t)}\, e_{\ominus(\alpha^{2}\ominus 1)}(t,0),
\]
\[
\frac{\alpha^{2}}{1-\alpha^{2}}\, e^{\Delta}_{\ominus(\alpha^{2}\ominus 1)}(t,0) = \frac{1}{\mu(t)\, e_{\alpha^{2}\ominus 1}(t,0)}. \tag{4.30}
\]
Then we substitute the expression in equation (4.30) into the integral in equation (4.29); that is,
\begin{align}
\mathrm{Var}[\ln(k(t))] &= \frac{\sigma^{2}}{\alpha^{2}}\, e_{\alpha^{2}\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{1}{\mu(s)}\,\frac{\alpha^{2}}{1-\alpha^{2}}\, e^{\Delta}_{\ominus(\alpha^{2}\ominus 1)}(s,0)\,\Delta s \nonumber \\
&= \frac{\sigma^{2}}{1-\alpha^{2}}\, e_{\alpha^{2}\ominus 1}(t,0)\int_{s\in[0,t)_{\mathbb{T}}} \frac{1}{\mu(s)}\, e^{\Delta}_{\ominus(\alpha^{2}\ominus 1)}(s,0)\,\Delta s. \tag{4.31}
\end{align}
Thus, we obtain the variance of the optimal state variable $k(t)$ as in equation (4.31).
Example. For $\mathbb{T} = h\mathbb{Z}$, we have $\mu(t) = h$. Then the variance given with equation (4.31) becomes
\[
\mathrm{Var}[\ln(k(t))] = \frac{\sigma^{2}}{1-\alpha^{2}}\, e_{\frac{\alpha^{2}-1}{h}}(t,0)\,\frac{1}{h}\int_{s\in[0,t)_{\mathbb{T}}} e^{\Delta}_{\ominus\left(\frac{\alpha^{2}-1}{h}\right)}(s,0)\,\Delta s. \tag{4.32}
\]
By the definition of "circle minus" on time scales, we have
\[
e_{\ominus\left(\frac{\alpha^{2}-1}{h}\right)}(t,0) = e_{\frac{1-\alpha^{2}}{\alpha^{2} h}}(t,0).
\]
We substitute this in equation (4.32), and we obtain
\[
\mathrm{Var}[\ln(k(t))] = \frac{\sigma^{2}}{1-\alpha^{2}}\, e_{\frac{\alpha^{2}-1}{h}}(t,0)\,\frac{1}{h}\int_{s\in[0,t)_{\mathbb{T}}} e^{\Delta}_{\frac{1-\alpha^{2}}{\alpha^{2} h}}(s,0)\,\Delta s. \tag{4.33}
\]
Because the exponential function in equation (4.33) is equivalent to
\[
e_{\frac{\alpha^{2}-1}{h}}(t,0) = \left( 1 + \frac{\alpha^{2}-1}{h}\, h \right)^{\frac{t-0}{h}} = \alpha^{\frac{2t}{h}}
\]
for $\mathbb{T} = h\mathbb{Z}$, we have
\begin{align*}
\mathrm{Var}[\ln(k(t))] &= \frac{\sigma^{2}}{1-\alpha^{2}}\,\alpha^{\frac{2t}{h}}\,\frac{1}{h}\int_{s\in[0,t)_{\mathbb{T}}} e^{\Delta}_{\frac{1-\alpha^{2}}{\alpha^{2} h}}(s,0)\,\Delta s \\
&= \frac{\sigma^{2}}{1-\alpha^{2}}\,\alpha^{\frac{2t}{h}}\,\frac{1}{h}\left[ e_{\frac{1-\alpha^{2}}{\alpha^{2} h}}(s,0) \right]_{0}^{t} \\
&= \frac{\sigma^{2}}{1-\alpha^{2}}\,\alpha^{\frac{2t}{h}}\,\frac{1}{h}\left[ e_{\frac{1-\alpha^{2}}{\alpha^{2} h}}(t,0) - 1 \right].
\end{align*}
Also, for $\mathbb{T} = h\mathbb{Z}$, we have
\[
e_{\frac{1-\alpha^{2}}{\alpha^{2} h}}(t,0) = \left( 1 + \frac{1-\alpha^{2}}{\alpha^{2} h}\, h \right)^{\frac{t-0}{h}} = \left( \frac{1}{\alpha} \right)^{\frac{2t}{h}}.
\]
Thus, we have
\begin{align*}
\mathrm{Var}[\ln(k(t))] &= \frac{\sigma^{2}}{1-\alpha^{2}}\,\alpha^{\frac{2t}{h}}\,\frac{1}{h}\left[ \left( \frac{1}{\alpha} \right)^{\frac{2t}{h}} - 1 \right] \\
&= \frac{\sigma^{2}}{h\left( 1-\alpha^{2} \right)} - \frac{\sigma^{2}}{h\left( 1-\alpha^{2} \right)}\,\alpha^{\frac{2t}{h}}.
\end{align*}
Hence, for $0 < \alpha < 1$, as $t \to \infty$ the variance is
\[
\mathrm{Var}[\ln(k(t))] = \frac{\sigma^{2}}{h\left( 1-\alpha^{2} \right)}
\]
for $\mathbb{T} = h\mathbb{Z}$, and it is
\[
\mathrm{Var}[\ln(k(t))] = \frac{\sigma^{2}}{1-\alpha^{2}}
\]
for $\mathbb{T} = \mathbb{Z}$.
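For $\mathbb{T} = \mathbb{Z}$ (i.e., $h = 1$), this prediction is easy to test by simulation: the formula above gives $\mathrm{Var}[\ln(k(t))] = \sigma^{2}\left( 1 - \alpha^{2t} \right)/\left( 1 - \alpha^{2} \right)$ at finite $t$, with limit $\sigma^{2}/(1-\alpha^{2})$. The sketch below compares the sample variance of $\ln k(t)$ across many simulated paths of the optimal state on $\mathbb{Z}$ with this prediction; parameter values are illustrative.

```python
import numpy as np

# Monte Carlo check of Var[ln k(t)] = sigma^2 (1 - alpha^(2t)) / (1 - alpha^2) on T = Z.
# Paths start from a common k(0), so Var[ln k(0)] = 0.  Values are illustrative.
rng = np.random.default_rng(6)
alpha, beta, mu_star, sigma = 0.4, 0.95, 0.0, 0.2
reps = 50000

k = np.full(reps, 0.5)
for t in range(1, 41):
    z = np.exp(rng.normal(mu_star, sigma, size=reps))
    k = alpha * beta * z * k**alpha
    if t in (2, 10, 40):
        pred = sigma**2 * (1 - alpha ** (2 * t)) / (1 - alpha**2)
        print(f"t={t}: sample {np.log(k).var():.4f}  vs  predicted {pred:.4f}")
```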
CHAPTER 5
CONCLUSION AND FUTURE WORK
Dynamic programming is one of the methods included in the theory of dynamic optimization. The concept of dynamic programming is useful in multistage decision processes, where the result of a decision at one stage affects the decision to be made in the succeeding stage. Therefore, if one is interested in a long term effect of the decisions that have been made, rather than an immediate effect, then one has become involved in the concept of dynamic programming. The method has advantages, such as enabling the decision maker to formulate the problem on an infinite horizon in both deterministic and stochastic cases, with flexibility on the constraints. These advantages have made dynamic programming an effective and useful mathematical principle in optimization theory.

Since Richard Bellman's invention of dynamic programming in the 1950s, economists and mathematicians have formulated and solved a huge variety of sequential decision making problems in both deterministic and stochastic cases, on either finite or infinite time horizons. Yet, the absence of conditions which determine the nature of trading as periodic or continuous forces us to reformulate the decision making problem on non-periodic time domains. Thus, we developed the concept of dynamic programming on non-periodic time domains, which are known as isolated time scales. We formulated the sequence problem on isolated time scales for both the deterministic and stochastic cases, and then we derived the corresponding Bellman equations, respectively. We developed Bellman's Principle of Optimality on isolated time scales, and we proved the theory which unifies and generates the existence results. Then we formulated and solved the deterministic optimal growth model, that is, the social planner's problem, by using time scales calculus. This case gave us the optimal solution with risk aversion, which is the optimal solution under certainty. Because we used the guessing method to find the solution of the Bellman equation, we showed the existence and uniqueness of this solution by using the Contraction Mapping Theorem.

While deterministic problems have the property that the result of any decision is known to the decision maker before the decision is made, stochastic problems specify the probability of each of the various outcomes of the decision. Thus, the stochastic case gives more accurate results. Therefore, we formulated and stated the optimal solutions of the stochastic social planner's problem. Because these optimal solutions involved random variables, which make explicit solutions impossible, we examined the distributions of these solutions.

In this study, we established the necessary theory of dynamic programming on isolated time scales. For future work, we will examine an application of this theory in the real world. Then, we will make a comparison between the deterministic and stochastic models.
BIBLIOGRAPHY
[1] F. M. Atici, D. C. Biles, "First Order Dynamic Inclusions on Time Scales", Journal of Mathematical Analysis and Applications, 292 No. 1 (2004) 222-237.

[2] F. M. Atici, D. C. Biles, A. Lebedinsky, "An Application of Time Scales to Economics", Mathematical and Computer Modelling, 43 (2006) 718-726.

[3] F. M. Atici, D. C. Biles, A. Lebedinsky, "A Utility Maximization Problem on Multiple Time Scales", International Journal of Dynamical Systems and Differential Equations, 3 No. 1-2 (2011) 38-47.

[4] R. E. Bellman, "On the Theory of Dynamic Programming", Proceedings of the National Academy of Sciences of the United States of America, 38 No. 8 (1952) 716-719.

[5] R. E. Bellman, Dynamic Programming, Sixth Edition, Princeton University Press, New Jersey, 1957.

[6] P. Bergin, "Lecture Notes on Dynamic Programming", Introduction to Dynamic Macroeconomic Analysis lecture notes, http://www.econ.ucdavis.edu/faculty/bergin/ECON200e/lec200e.pdf, Spring 1998.

[7] M. Bohner, A. Peterson, Dynamic Equations on Time Scales: An Introduction with Applications, Birkhäuser, Boston, 2001.

[8] S. Dreyfus, "Richard Bellman on the Birth of Dynamic Programming", Operations Research, 50 No. 1 (2002) 48-51.

[9] R. R. Goldberg, Methods of Real Analysis, Blaisdell Publishing Company, 1964.

[10] J. Hassler, "Dynamic Optimization in Discrete Time", Mathematics II lectures, http://hassler-j.iies.su.se/Courses/matfuii/1999/lects199Bellman.pdf, November 22, 1999.

[11] M. I. Kamien, N. L. Schwartz, Dynamic Optimization: The Calculus of Variations and Optimal Control in Economics and Management, Second Edition, Elsevier North Holland, 1992.

[12] I. Karatzas, "Applications of Stochastic Calculus in Financial Economics", Distinguished Lecture Series on Stochastic Calculus, University of Maryland, 1987.

[13] F. C. Klebaner, Introduction to Stochastic Calculus with Applications, Second Edition, Imperial College Press, 2005.

[14] E. Merrell, R. Ruger, J. Severs, "First-Order Recurrence Relations on Isolated Time Scales", PanAmerican Mathematical Journal, 14 No. 1 (2004) 83-104.

[15] D. Romer, Advanced Macroeconomics, Third Edition, McGraw-Hill, 1996.

[16] W. Rudin, Principles of Mathematical Analysis, McGraw-Hill Book Company, 1976.

[17] J. Seiffertt, S. Sanyal, D. C. Wunsch, "Hamilton-Jacobi-Bellman Equations and Approximate Dynamic Programming on Time Scales", IEEE Transactions on Systems, Man, and Cybernetics-Part B: Cybernetics, 38 No. 4 (2008) 918-923.

[18] N. Stokey, R. Lucas, Recursive Methods in Economic Dynamics, Harvard University Press, 1989.

[19] Z. Zhan, W. Wei, H. Xu, "Hamilton-Jacobi-Bellman Equations on Time Scales", Mathematical and Computer Modelling, 49 (2009) 2019-2028.