
WATER RESOURCES RESEARCH, VOL. 35, NO. 4, PAGES 1219-1229, APRIL 1999

Optimal decomposition of covariance matrices for multivariate stochastic models in hydrology

Demetris Koutsoyiannis
Department of Water Resources, Faculty of Civil Engineering, National Technical University, Athens

Abstract. A new method is proposed for decomposing covariance matrices that appear in the parameter estimation phase of all multivariate stochastic models in hydrology. This method applies not only to positive definite covariance matrices (as do the typical methods of the literature) but also to indefinite matrices that often appear in stochastic hydrology. It is also appropriate for preserving the skewness coefficients of the model variables, as it accounts for the resulting coefficients of skewness of the auxiliary (noise) variables used by the stochastic model, given that the latter coefficients are controlled by the decomposed matrix. The method is formulated in an optimization framework with the objective function being composed of three components aiming at (1) complete preservation of the variances of the variables, (2) optimal approximation of the covariances of the variables, in the case that complete preservation is not feasible due to an inconsistent (i.e., not positive definite) structure of the covariance matrix, and (3) preservation of the skewness coefficients of the model variables by keeping the skewness of the auxiliary variables as low as possible. Analytical expressions of the derivatives of this objective function are derived, which allow the development of an effective nonlinear optimization algorithm using the steepest descent or the conjugate gradient methods. The method is illustrated and explored through a real world application, which indicates a very satisfactory performance of the method.
1. Introduction

In the parameter estimation phase of all multivariate models of stochastic hydrology, we confront the problem of decomposing a covariance matrix c into another matrix b such that c = b b^T (also known as "taking the square root" of c). It is well known that this problem has an infinite number of solutions when c is positive definite and no (real) solution otherwise. In the era of the first development of the multivariate stochastic models in hydrology, Matalas and Wallis [1971] (see also Matalas and Wallis [1976]) first pointed out that multivariate stochastic models in hydrology may be inconsistent in the sense that their covariance matrix c, estimated from the historical hydrological data, may not be positive definite; in that case the "square root matrix" b, which is necessary to express the model itself, does not exist. As will be explained in section 2, such inconsistent matrices are encountered either when only a subset of the covariances among related variables are explicitly modeled or when missing data affect the parameter estimation. In real world applications, such situations are not infrequent (see Grygier and Stedinger [1990, p. 31] and section 5 below). Interestingly, Slack [1973], in his article with the emphatic title "I would if I could," showed that multivariate synthetic hydrological series may lead to an inconsistent (i.e., indefinite) covariance matrix c, even if those series were the output of a consistent and simple stochastic model (such as the bivariate Markovian model).

Hydrologists have not hesitated to provide approximate solutions in cases of inconsistent matrices. Various approximate techniques were presented in several works which could be titled "I could if I should." Among them are those proposed by Mejia and Millan [1974] (also quoted by Bras and Rodriguez-Iturbe [1985, p. 98]), Grygier and Stedinger [1990, pp. 31-33], Koutsoyiannis [1992], and Koutsoyiannis and Manetas [1996]. These techniques are rather empirical and result in more or less significant alteration of the covariance matrix c. A more theoretical approach was devised by Rasmussen et al. [1996], who suggested determination of b by numerical constrained optimization, where the objective function is the squared difference between the observed covariance of the data and that produced by the estimated model, and the constraint is the requirement for a positive definite matrix c. This method was formulated for a two-site case (i.e., for a matrix c with size 2 x 2); in this case it is convenient to express the positive definiteness constraint analytically, but it may be difficult to expand to higher-dimensional problems.

Seeking a more generalized theoretical basis for remedying inconsistent covariance matrices, we will assume that, whatever the dimensionality of the problem is, there exists an optimal matrix b that results in the least significant alteration of c, or the best approximation of the original c. Then the questions arise of (1) how we can objectively quantify the degree of approximation and (2) how we can search systematically to find the optimal solution.

Copyright 1999 by the American Geophysical Union.
Paper number 1998WR900093.
0043-1397/99/1998WR900093$09.00

In the case of a consistent (i.e., positive definite) covariance matrix c there exist two well-known algorithms for deriving two different solutions b (see, for example, Bras and Rodriguez-Iturbe [1985, p. 96]). The first and simpler algorithm, known as triangular or Cholesky decomposition, results in a lower triangular b. The second, known as singular value decomposition, results in a full b using the eigenvalues and eigenvectors of c. However, since it is known that there exists an infinite number of solutions b, the question arises whether there exists an


optimal solution, possibly different from these two. The answer to this question would be negative if we had no other concern apart from the determination of b. In that case the computationally simpler lower triangular b is the most preferable. However, as we will see, there are cases where other concerns must be considered and the answer to this question becomes positive. Then, the subsequent questions are (1) how we can quantify this optimality and (2) how we can search systematically to find the optimal solution. Again, we have about the same questions for consistent matrices as in the case of inconsistent matrices. This enables a single treatment of the decomposition problem for consistent or inconsistent matrices.

Another frequent problem in multivariate stochastic models is encountered when we attempt to preserve the coefficients of skewness of the model variables. The auxiliary variables associated with the stochastic model, also known as noise variables or innovation variables, may potentially have very high coefficients of skewness that are practically unachievable. This was first reported by Todini [1980], who encountered a coefficient of skewness greater than 30 and was not in a position to preserve it. Koutsoyiannis and Manetas [1996] related the problem of high skewness to that of the determination of the matrix b, since the skewness coefficients are proportional to the inverse of the matrix b^(3), that is, the matrix whose elements are the cubes of b. The close relation of these two problems can contribute to the quantification of the optimality of the matrix b in the case of either consistent or inconsistent c. That is, we can set the requirement that the matrix b must result in as small a coefficient of skewness of the auxiliary variables as possible.

The purpose of this study is the development of a systematic method to remedy all the above described parameter estimation problems for both consistent and inconsistent covariance matrices, also answering the questions set above. All problems are resolved in a single manner on the grounds of an optimization framework. A single objective function incorporating all concerns about the matrix b is proposed, and a procedure is developed for finding the optimal b. This procedure is based on nonlinear optimization theory and utilizes both the objective function and its partial derivatives with respect to b, which are analytically derived in this paper.

The paper is organized in six sections. Section 2 is devoted to the clarification of notation and some introductory aspects of a generalized stochastic model that constitutes the basis for further analysis. In section 3 we formulate the conditions that determine an optimal matrix b and develop the objective function. In section 4 we develop the numerical procedure for determining the optimal b. In section 5 we present a case study to illustrate the method and investigate some practical issues. Section 6 is devoted to conclusions and discussion. In addition, there is an appendix where we have placed an essential part of the paper, that is, the analytical derivation of the derivatives of the objective function.

2. Stochastic Model

In the following analysis we will consider a general type of linear model which is typical in stochastic hydrology, that is,

Y = aZ + bV    (1)

where Y is a vector of n stochastic variables to be generated, Z is a vector of m stochastic variables with known values (n and m may be equal or not), V is a vector of n random variates with unit variance, mutually independent, and also independent of Z (often called noise, innovation, or auxiliary variables), and a and b are matrices of coefficients with sizes n x m and n x n, respectively. (In this paper we use uppercase letters for random variables and lowercase letters for values of variables; also we use bold letters for matrices or vectors and regular letters for scalars.) Generally, the elements of Y represent specific hydrologic processes (rainfall, runoff, etc.) at some locations specified by an index l = 1, ..., n, at a specific time period, whereas the elements of Z represent the same or other related hydrologic processes at the same or other locations, generally at a previous time period. The variables Y, Z, and V are not necessarily standardized to have zero mean and unit variance, although this is the case in most common models; however, V has by definition unit variance. Also, the variables Y, Z, and V are not necessarily Gaussian.

For example, in the case of the stationary AR(1) model we set Y = X_t and Z = X_{t-1}, where X_t represents hydrologic variables at n sites at the time period (typically year) t, and (1) is written

X_t = a X_{t-1} + b V_t    (2)

Similarly, in the case of the seasonal AR(1) (or PAR(1)) model, (1) is written

X_s = a_s X_{s-1} + b_s V_s    (3)

where now the matrices of coefficients depend on the season (typically month) s. In both these examples the vectors Y and Z have the same dimension n = m. In the case of the AR(2) model we have Y = X_t and Z = [(X_{t-1})^T, (X_{t-2})^T]^T (where the exponent T denotes the transpose of a matrix or vector), so that m = 2n. Similar is the situation for the PAR(2) model. In another example, Valencia and Schaake's [1972, 1973] disaggregation model, Y represents the n = 12m monthly hydrologic variables at m sites, whereas Z represents the m annual values at the same locations. Some other disaggregation models can also be reduced to the form (1) after appropriate assignments of the variables Y and Z.

The model parameters a and b are typically determined by the moment estimators

a = Cov[Y, Z] {Cov[Z, Z]}^(-1)    (4)

b b^T = Cov[Y, Y] - a Cov[Z, Z] a^T    (5)

where Cov[Ξ, H] denotes the covariance matrix of any two random vectors Ξ and H, that is, Cov[Ξ, H] := E{(Ξ - E[Ξ])(H^T - E[H]^T)}, with E[ ] denoting expected value (the symbol := stands for equality by definition). These equations are direct generalizations for the model (1) of the equations of the AR and PAR models given by Matalas and Wallis [1976, p. 63], Salas et al. [1988, p. 381], Salas [1993, p. 19.31], and Koutsoyiannis and Manetas [1996], among others.

Another group of model parameters are the moments of the auxiliary variables V. The first moments (means) are directly obtained from (1) by taking expected values, that is,

E[V] = b^(-1) {E[Y] - a E[Z]}    (6)

The variances are by definition 1, that is,

Var[V] = [1, ..., 1]^T    (7)
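As a quick numerical check of the estimators (4) and (5), the following sketch simulates a bivariate AR(1) model with invented coefficient matrices (not taken from the paper) and recovers a and the product b b^T from the sample moments:

```python
import numpy as np

# Hypothetical AR(1) coefficients (spectral radius < 1, so the model is
# stationary); these are illustrative values, not the paper's.
a_true = np.array([[0.5, 0.2], [0.1, 0.4]])
b_true = np.array([[1.0, 0.0], [0.3, 0.8]])

rng = np.random.default_rng(0)
k, n = 50_000, 2
x = np.zeros((k, n))
for t in range(1, k):                       # simulate X_t = a X_{t-1} + b V_t
    x[t] = a_true @ x[t - 1] + b_true @ rng.standard_normal(n)

Y, Z = x[1:], x[:-1]                        # Y = X_t, Z = X_{t-1}

def cov(u, v):
    """Sample covariance matrix Cov[u, v] of two vector series."""
    uc, vc = u - u.mean(0), v - v.mean(0)
    return uc.T @ vc / (len(u) - 1)

a_hat = cov(Y, Z) @ np.linalg.inv(cov(Z, Z))          # equation (4)
c_hat = cov(Y, Y) - a_hat @ cov(Z, Z) @ a_hat.T       # equation (5) = b b^T

print(np.allclose(a_hat, a_true, atol=0.05))          # expect True (large sample)
print(np.allclose(c_hat, b_true @ b_true.T, atol=0.05))
```

Note that (5) recovers only the product b b^T, not b itself, which is exactly the decomposition problem this paper addresses.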

The third moments are obtained by cubing both sides of (1), where previously we have subtracted the means, and then taking expected values. Observing that, because of the independence of Z and V, joint third-order terms of (Z_k - E[Z_k]) and (V_i - E[V_i]) have zero expected values, and similarly, because of the mutual independence of V, joint third-order terms of (V_j - E[V_j]) and (V_i - E[V_i]) also have zero expected values, we get

μ3[V] = (b^(3))^(-1) {μ3[Y] - μ3[aZ]}    (8)

where μ3[Ξ] denotes the third central moments of any random vector Ξ, that is, μ3[Ξ] := E{(Ξ - E[Ξ])^(3)}, and b^(3) denotes the matrix whose elements are the cubes of b. (Throughout this paper we will extend this notation, that is, u^(k) for any matrix or vector u and any power k.) Equation (8) is a generalization of those given by Matalas and Wallis [1976, p. 64], Todini [1980], and Koutsoyiannis and Manetas [1996]. Moments of order greater than 3 are not used, nor can they be estimated in a similar manner.

The set of equations (4)-(8) completely determines the model parameters, with the exception of (5), which does not estimate the parameter matrix b but the product b b^T. This has to be decomposed, as will be discussed in the next section. Generally, all parameter estimation equations involve only moments of the original variables Y and Z, either marginal of order 1 to 3, or joint of order 2. There is an exception in (8), which involves third moments of a linear combination of Z (i.e., aZ) which cannot be estimated in terms of the marginal third moments of Z_k (in fact, third-order joint moments of Z are involved, which are impractical to use). Thus the solution is to estimate μ3[aZ] from the available data for Z after estimating a and performing the linear transformation aZ. However, this is an inconvenient situation that can be avoided only if each row of a contains only one nonzero element (for m <= n), in which case (8) reduces to

μ3[V] = (b^(3))^(-1) {μ3[Y] - a^(3) μ3[Z]}    (9)

Apparently, (4) is not appropriate to construct such an a (i.e., with one nonzero element in each row), and therefore it must be replaced by a simplified form so that if a_ij is the only nonzero element for row i, then

a_ik = Cov[Y_i, Z_j] / Var[Z_j],  k = j;    a_ik = 0,  k ≠ j    (10)

As a consequence, covariances among Y and Z apart from Cov[Y_i, Z_j] are not preserved in this case. All other parameter estimation equations are still valid in their form written above, with (8) replaced by its simplified form (9). This special case is met, for example, in the so-called contemporaneous AR(1) and contemporaneous PAR(1) models [Matalas and Wallis, 1976, p. 63; Salas, 1993, p. 19.31]. In both these models a is diagonal; for example, in the latter model

a_s = diag(Cov[X_s^1, X_{s-1}^1] / Var[X_{s-1}^1], ..., Cov[X_s^n, X_{s-1}^n] / Var[X_{s-1}^n])    (11)

Having expressed all basic equations for parameter estimation, we can return to the discussion of the introduction (section 1) for the situations leading to inconsistent matrices c = b b^T. We must emphasize that the model (1) with parameter estimators (4) and (5) and complete data sets for estimation always results in a positive definite, that is, consistent, c. The only cases in which inconsistencies may appear are (1) the trivial case with one series of equal values (resulting in a covariance matrix column with all zeroes); (2) when different items of the covariance matrices are estimated using records of different lengths due to missing data; and (3) when a simplified form of the matrix a is adopted, such as in (10) (instead of the complete form (4)). To case 2 we must incorporate a rather usual practice followed in AR models (e.g., when Y = X_t and Z = X_{t-1}), which may cause inconsistencies even in the case of using the complete form of a and having no missing data: This occurs when the contemporaneous covariance matrices (e.g., Cov[X_t, X_t]) are estimated using the complete series of length k, whereas lagged covariance matrices (e.g., Cov[X_t, X_{t-1}]) are estimated using (unavoidably) smaller lengths (e.g., k - 1). Given all the above equations for the model parameters, we will focus on the estimation of b, which also affects both E[V] and μ3[V], as follows from (6)-(9).

3. Formulation of the Conditions Determining an Optimal Matrix b

As stated in the introduction, the purpose of this paper is the quantification of the performance of the parameter matrix b through an appropriate objective function that incorporates all concerns about that matrix. Given that objective function, we can then develop an algorithm to find an optimal b.

As an initial step for a more convenient formulation of the method for determining an optimal matrix b, we perform a standardization of b and the other matrices and vectors associated with it. We call c the known right-hand term of (5), that is,

c := Cov[Y, Y] - a Cov[Z, Z] a^T    (12)

The matrix c is, in fact, the variance-covariance matrix of the vector Y - aZ, and thus all its diagonal elements are positive (they represent the variances of the vector components). Thus we can standardize c using the diagonal matrix

h := diag(1/√c_11, ..., 1/√c_nn)    (13)

so that

c' := h c h    (14)

has all its diagonal elements equal to 1 and the off-diagonal elements between -1 and 1. (The off-diagonal elements may slightly violate this rule if a is constructed by (10) or (11) instead of (4).) If we define

b' := h b    (15)

then it is easily shown that (5) becomes

b' b'^T = c'    (16)

and if we also define the vectors

φ := h^(3) {μ3[Y] - μ3[aZ]}    (17)

ξ := μ3[V]    (18)

then (8) becomes

ξ = (b'^(3))^(-1) φ    (19)

The matrices h and c' and the vector φ are known, whereas b' and ξ have to be determined; specifically, ξ depends on b'.
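Before developing the optimization, it may help to see concretely why (16) can fail to have a real solution. The following sketch (NumPy; the matrices are invented for illustration and are not taken from the paper's case study) standardizes a matrix as in (13)-(14) and attempts the triangular (Cholesky) decomposition, which succeeds for a consistent c' and fails for an indefinite one:

```python
import numpy as np

def standardize(c):
    """Return h = diag(1/sqrt(c_ii)) and c' = h c h (equations (13)-(14))."""
    h = np.diag(1.0 / np.sqrt(np.diag(c)))
    return h, h @ c @ h

def try_cholesky(c_std):
    """Attempt the triangular ('square root') decomposition b' b'^T = c'."""
    try:
        return np.linalg.cholesky(c_std)   # lower triangular b'
    except np.linalg.LinAlgError:
        return None                        # c' is not positive definite

# A consistent (positive definite) matrix: the decomposition exists.
c1 = np.array([[4.0, 1.2, 0.8],
               [1.2, 2.0, 0.5],
               [0.8, 0.5, 1.0]])
h1, c1s = standardize(c1)
b1 = try_cholesky(c1s)
print(b1 is not None)                      # True
print(np.allclose(b1 @ b1.T, c1s))         # True

# An inconsistent matrix of the kind discussed in section 2: unit diagonal
# but indefinite, so no real solution b' of b' b'^T = c' exists.
c2 = np.array([[ 1.0, 0.9, -0.9],
               [ 0.9, 1.0,  0.9],
               [-0.9, 0.9,  1.0]])
print(np.min(np.linalg.eigvalsh(c2)) < 0)  # True: a negative eigenvalue
print(try_cholesky(c2) is None)            # True: Cholesky fails
```

The method developed below replaces the failing exact decomposition with an optimal approximate one.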

Equations (16) and (19) constitute the basis for the proposed method. Since (16) does not always have a real solution, we set

d := b' b'^T - c'    (20)

and demand that all elements of d be as close to zero as possible. This requirement can be expressed mathematically as

minimize ||d||^2 := Σ_{i=1}^n Σ_{j=1}^n d_ij^2    (21)

Here we have used the notation ||d|| for the norm, as if d were a vector rather than a matrix. (If it were a vector, then ||d|| would be its Euclidean or standard norm; see, for example, Marlow [1993, p. 59].) We have used the square of this norm, ||d||^2, instead of ||d||, for two reasons: first, because the computations are simpler, as we will see in the next section, and second, because it was found that the convergence of the optimization procedure is faster for ||d||^2 than for ||d||.

In addition, we must set a restriction that all diagonal elements of d should be exactly zero. To justify this requirement, we observe that if the diagonal elements of d are zero, then all diagonal elements of b b^T will equal those of c. In turn, this will result in preservation of the diagonal elements of Cov[Y, Y] (as implied from (5) and (12)), that is, in preservation of the variances of all elements of Y. The preservation of the variances must have priority over that of covariances because the former is related to the preservation of the marginal distribution functions of the components of Y. To express this requirement mathematically, we introduce the diagonal matrix

d* := diag(d_11, ..., d_nn)    (22)

and demand that

||d*|| = 0    (23)

Another restriction arises when we consider the preservation of the coefficients of skewness of Y. The coefficients of skewness of V (i.e., ξ) that preserve the coefficients of skewness of Y are those obtained by (19). However, (19) may result in arbitrarily high elements of ξ if no relevant provision for b is sought. Nevertheless, in the model application phase (i.e., the generation of synthetic data), an arbitrarily high coefficient of skewness is hardly achievable. Specifically, it is well known that in a finite sample of size k the coefficient of skewness is bounded [Wallis et al., 1974; Kirby, 1974; Todini, 1980] between -ξ_ub and ξ_ub, where

ξ_ub = (k - 2) / √(k - 1)    (24)

In particular, a series of generated values V_r (r = 1, ..., k) will have skewness ξ_ub only if all but one value of the series are equal. Apparently, this is not an acceptable arrangement of the generated series, and thus an acceptable coefficient of skewness ξ_acc must be much less than ξ_ub (e.g., ξ_acc = 0.5 ξ_ub; for k = 1000, ξ_acc = 0.5 x 1000^0.5 ≈ 16). Consequently, we must set the restriction that all n auxiliary variables V_i (i = 1, ..., n) have coefficients of skewness less than ξ_acc, that is,

max{|ξ_i|, i = 1, ..., n} <= ξ_acc    (25)

The handling of (25) in an optimization problem is not mathematically convenient, especially when we wish to calculate derivatives, as is the case in the procedure that we will develop. However, we recall that if

||ξ||_p := (Σ_{i=1}^n |ξ_i|^p)^(1/p)    (26)

is the pth norm of ξ, then ||ξ||_p → max{|ξ_i|, i = 1, ..., n} when p → ∞ (see, for example, Marlow [1993, p. 59]). Therefore (25) can be replaced by

||ξ||_p^2 <= ξ_acc^2    (27)

where we have squared the norm as we already did in (21) and (23). To avoid the absolute values within the right-hand side of (26), it suffices to use an even p. Specifically, we have adopted the value p = 8, which was numerically proven to be adequately high for ||ξ||_p to approach max{|ξ_i|, i = 1, ..., n}.

In this formulation the problem of determining b' becomes a constrained optimization problem with three elements: (1) the objective function (21) to be minimized, (2) the equality constraint (23), and (3) the inequality constraint (27). Note that (27) is related to b' through (19). In terms of the Lagrangian method, the objective function and the constraints can be combined into the unconstrained minimization problem

minimize θ2(b', λ2, λ3) := ||d||^2 + λ2 ||d*||^2 + λ3 (||ξ||_p^2 - ξ_acc^2)    (28)

where λ2 and λ3 are Lagrangian multipliers for the equality constraint (23) and the inequality constraint (27), respectively.

A general theoretical framework for solving this problem might consist of two steps: In the first step we assume that λ3 = 0 (i.e., ignore the inequality constraint) and determine b' by minimizing θ2(b', λ2, 0). (This step may be avoided if c' is positive definite, in which case b' can be determined by the typical decomposition methods.) If the solution b' of this step satisfies (27), then this solution is the optimal one. Otherwise, we proceed to the second step, where we perform the optimization of the complete function (28) (i.e., with λ3 ≠ 0) to simultaneously determine b', λ2, and λ3. For this optimization the Kuhn-Tucker optimality conditions (see, for example, Marlow [1993, p. 271]) must be satisfied.

Apparently, due to the complexity of the problem equations, an analytical solution is not attainable, and a numerical optimization scheme must be established. An initial attempt to establish a numerical procedure for (28) (using typical available solvers for multivariate constrained optimization) indicated that it is not easy to optimize it simultaneously for the parameters b' and the Lagrange multipliers. Particularly, it was observed that the initially assumed values of the Lagrange multipliers do not advance toward an optimal solution but remain almost constant through the consecutive steps of numerical optimization. At the same time, the objective function itself has a decreasing trajectory through the consecutive steps, owing to the appropriate modification of the parameters b'. This indicates that the formulation of the problem in terms of (28) is not effective in practice.

A direct alternative is the introduction of penalties for the constraints [e.g., Pierre, 1986, pp. 21, 333-340]. This alternative is similar to the Lagrangian formulation as it combines the objective function and the constraints into a single unconstrained objective function; as we will see, this function can be very similar to (28). The essential difference between the two formulations is the fact that in the second method the coefficients λ2 and λ3 are preset weighting factors of the penalizing terms, no longer determined by the optimization procedure. However, given the preliminary numerical results discussed in the previous paragraph (i.e., the constancy of the coefficients λ through consecutive steps), this difference does not have any practical meaning for the problem examined.

To formulate the problem in terms of penalty functions, we observe that the equality constraint (23) is easily incorporated into the objective function (21) by the addition of the penalty term λ2 ||d*||^2, where λ2 is assigned a large value so that even a slight departure of ||d*|| from zero results in a significant "penalty." The inequality constraint (27) can be treated in several ways, of which the simplest is the addition of the penalty term λ3 ||ξ||_p^2 into the objective function (21), where λ3 is a weighting factor appropriately chosen so that the penalty term is small enough when ||ξ||_p <= ξ_acc. The resulting objective function in this case is

θ2(b') = λ1 ||d||^2 / n^2 + λ2 ||d*||^2 / n + λ3 ||ξ||_p^2    (29)

Here we have divided ||d||^2 by n^2, the number of elements of d, to convert the total "square error" ||d||^2 into an average square error for each element of d; similarly, we have divided ||d*||^2 by n because d* is diagonal, so that the number of nonzero elements is n. The coefficient λ1 in the first term of the right-hand side of (29) may be considered equal to 1 unless specified otherwise (see the case study in section 5). We observe that (29) is similar to (28) from a practical point of view: As discussed before, the theoretical difference that the parameters λ are preset constants in (29) has no practical implication; also, the omission in (29) of ξ_acc^2, which is a constant, apparently does not affect the minimization process.

A reasonable choice for λ2 to ensure that ||d*|| will be very close to zero (so that constraint (23) will hold) is λ2 = 10^3; this value assigns the average square error of the diagonal elements a weight 3 orders of magnitude higher than that of the off-diagonal ones. For the choice of an appropriate value of λ3, let us assume that an acceptable value of ||d||^2 / n^2 (the first term of the right-hand side of (29)) will be of the order of 10^-3 (recall that c' is standardized so that its off-diagonal elements are less than 1), whereas an acceptable value of λ3 ||ξ||_p^2, as results from (24) and the subsequent discussion, will be about λ3 k/4, where k is the size of the synthetic record to be generated. Assuming that the acceptable values of the first and the third term are of the same order of magnitude, so that both terms have about the same significance in the objective function (29), we conclude that λ3 k/4 ≈ 10^-3, or λ3 ≈ 4 x 10^-3 / k. We may assume that in typical generation problems of stochastic hydrology k does exceed 40, so that a maximum value of λ3 must be about 10^-4. We may use a smaller value of λ3 (e.g., 10^-5 to 10^-6) either if the sample size is of a higher order of magnitude or if we wish to give a greater importance to the preservation of the covariances rather than to that of the coefficients of skewness.

The establishment of a numerical procedure for minimizing the objective function (29) is given in the next section.

4. Optimization Procedure

To establish an effective optimization algorithm for (29), we need first to determine the partial derivatives of all components of θ2 with respect to the unknown parameters b'_ij. This may seem a difficult task to accomplish analytically. However, it is not at all intractable. For the convenience of the reader we have placed all calculations of the derivatives in the appendix, and we present here only the final results of the calculations, which are very simple. In these results, given any scalar α, we use the notation

∂α/∂b' := [ ∂α/∂b'_11  ...  ∂α/∂b'_1n ; ... ; ∂α/∂b'_n1  ...  ∂α/∂b'_nn ]    (30)

for the matrix of its partial derivatives with respect to all b'_ij; this is an extension of the notation used for vectors [e.g., Marlow, 1993, p. 208].

As shown in the appendix, the derivatives of all components of θ2 are

∂(||d||^2)/∂b' = 4 d b'    (31)

∂(||d*||^2)/∂b' = 4 d* b'    (32)

∂(||ξ||_p^2)/∂b' = -6 ||ξ||_p^(2-p) w    (33)

where w is a matrix with elements

w_ij := b'_ij^2 ψ_i ξ_j    (34)

and ψ is a vector defined by

ψ := [(b'^(3))^(-1)]^T ξ^(p-1)    (35)

Thus the required matrix of derivatives of θ2 with respect to the unknown parameters b'_ij is

∂θ2/∂b' = (4 λ1 / n^2) d b' + (4 λ2 / n) d* b' - 6 λ3 ||ξ||_p^(2-p) w    (36)

It is apparent from the objective function (29) and its derivatives (36) that we have a typical nonlinear optimization problem, whose solution can be achieved by iterations, starting with an initial matrix b'^[0]. In the lth iteration we start with a known b'^[l] and we find an "improved" matrix b'^[l+1]; we repeat this procedure until the solution converges. Several algorithms are known in the literature for advancing from b'^[l] to b'^[l+1] (see Mays and Tung [1996, p. 6.12], among others, who summarize the most common ones). Among them, the most suitable for our case are the steepest descent and the Fletcher-Reeves conjugate gradient methods. These are chosen for their mathematical simplicity and convenience and their low memory requirements. Specifically, the mathematical formulation of both methods, which typically assumes a vector arrangement of the unknown parameters, can be directly adapted for our case, which uses a matrix arrangement of the unknown parameters. Furthermore, the memory requirements are of great importance because in the problem examined the number of unknown parameters is large, that is, n^2; the amount of memory locations for both chosen methods is of the order of n^2,


Table 1. Sample Statistics of the Monthly Rainfall and Runoff Data Used for the Case Study for Subperiods s = 8 (May) and s - 1 = 7 (April)

                                  i=1       i=2       i=3       i=4       i=5       i=6
                               (Evinos   (Evinos   (Mornos   (Mornos   (Yliki    (Yliki
                               Runoff)   Rainfall) Runoff)   Rainfall) Runoff)   Rainfall)
Means
  E[X_s^i]                       97.3     111.2      59.8     100.4      20.1      31.5
  E[X_{s-1}^i]                   53.1      69.5      43.2      65.3       9.4      19.3
Standard deviations
  StDev[X_s^i]                   35.0      57.6      17.4      53.6      10.7      27.1
  StDev[X_{s-1}^i]               20.2      30.8      18.8      26.5       7.2      14.4
Coefficients of skewness
  Cs[X_s^i]                       0.72      1.04      0.58      1.07      1.06      1.72
  Cs[X_{s-1}^i]                   0.76      0.89      0.87      0.81      1.53      1.49
Cross-correlation coefficients
  Corr[X_{s-1}^i, X_{s-1}^j]
    j=1                           1.00      0.76      0.85      0.56      0.22      0.16
    j=2                           0.76      1.00      0.67      0.90      0.30      0.43
    j=3                           0.85      0.67      1.00      0.58      0.69      0.14
    j=4                           0.56      0.90      0.58      1.00      0.37      0.50
    j=5                           0.22      0.30      0.69      0.37      1.00      0.83
    j=6                           0.16      0.43      0.14      0.50      0.83      1.00
  Corr[X_s^i, X_s^j]
    j=1                           1.00      0.54      0.76      0.53      0.57      0.49
    j=2                           0.54      1.00      0.29      0.81      0.15      0.67
    j=3                           0.76      0.29      1.00      0.20      0.75      0.23
    j=4                           0.53      0.81      0.20      1.00      0.27      0.45
    j=5                           0.57      0.15      0.75      0.27      1.00      0.22
    j=6                           0.49      0.67      0.23      0.45      0.22      1.00
Autocorrelation coefficients
  Corr[X_s^i, X_{s-1}^i]          0.60      0.00      0.78      0.00      0.80      0.00
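To connect Table 1 with the equations, the sketch below assembles the simplified a_s of (11) and the matrix c of (12) from the tabulated statistics (values as transcribed in the reconstruction above, so the matrix labels inherit that reading), standardizes c as in (13)-(14), and then runs a few steepest descent steps on the first two terms of the objective (29); the skewness penalty (the λ3 term) and the line search refinements of section 4 are omitted here for brevity:

```python
import numpy as np

sd_s  = np.array([35.0, 57.6, 17.4, 53.6, 10.7, 27.1])   # StDev[X_s^i]
sd_s1 = np.array([20.2, 30.8, 18.8, 26.5,  7.2, 14.4])   # StDev[X_{s-1}^i]
r_lag = np.array([0.60, 0.00, 0.78, 0.00, 0.80, 0.00])   # Corr[X_s^i, X_{s-1}^i]
R_s = np.array([[1.00, 0.54, 0.76, 0.53, 0.57, 0.49],    # Corr[X_s^i, X_s^j]
                [0.54, 1.00, 0.29, 0.81, 0.15, 0.67],
                [0.76, 0.29, 1.00, 0.20, 0.75, 0.23],
                [0.53, 0.81, 0.20, 1.00, 0.27, 0.45],
                [0.57, 0.15, 0.75, 0.27, 1.00, 0.22],
                [0.49, 0.67, 0.23, 0.45, 0.22, 1.00]])
R_s1 = np.array([[1.00, 0.76, 0.85, 0.56, 0.22, 0.16],   # Corr[X_{s-1}^i, X_{s-1}^j]
                 [0.76, 1.00, 0.67, 0.90, 0.30, 0.43],
                 [0.85, 0.67, 1.00, 0.58, 0.69, 0.14],
                 [0.56, 0.90, 0.58, 1.00, 0.37, 0.50],
                 [0.22, 0.30, 0.69, 0.37, 1.00, 0.83],
                 [0.16, 0.43, 0.14, 0.50, 0.83, 1.00]])

a = np.diag(r_lag * sd_s / sd_s1)             # simplified a_s, equation (11)
cov_s  = np.outer(sd_s,  sd_s)  * R_s         # Cov[X_s, X_s]
cov_s1 = np.outer(sd_s1, sd_s1) * R_s1        # Cov[X_{s-1}, X_{s-1}]
c  = cov_s - a @ cov_s1 @ a.T                 # equation (12)
h  = np.diag(1.0 / np.sqrt(np.diag(c)))       # equation (13)
cs = h @ c @ h                                # c', equation (14)

def theta2(bp, lam1=1.0, lam2=1e3):
    """First two terms of (29) and their derivatives (31)-(32)."""
    n = cs.shape[0]
    d = bp @ bp.T - cs                        # equation (20)
    dstar = np.diag(np.diag(d))               # equation (22)
    val = lam1 * np.sum(d**2) / n**2 + lam2 * np.sum(np.diag(d)**2) / n
    grad = (4 * lam1 / n**2) * d @ bp + (4 * lam2 / n) * dstar @ bp
    return val, grad

bp = np.eye(6)              # crude initial b'; section 4 suggests better choices
thetas = [theta2(bp)[0]]
for _ in range(200):        # steepest descent with a simple backtracking step
    val, grad = theta2(bp)
    eta = 1.0
    while eta > 1e-12:
        cand = bp - eta * grad
        if theta2(cand)[0] < val:
            bp = cand
            break
        eta *= 0.5
    thetas.append(theta2(bp)[0])
print(thetas[0], "->", thetas[-1])  # non-increasing by construction
```

The sign of the smallest eigenvalue of c' (e.g., `np.linalg.eigvalsh(cs)`) reveals whether this subperiod yields a consistent or an inconsistent matrix.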

whereas for other methods, such as quasi-Newton methods, it would be n^4.

Both these methods are described by the following common expression, adapted from the typical expression given in the literature for vector-arranged variables [e.g., Mays and Tung, 1996, p. 6.12; Press et al., 1992, p. 422],

b'^[l+1] = b'^[l] - η^[l+1] (∂θ2/∂b')^[l] + γ^[l] (b'^[l] - b'^[l-1])    (37)

where γ^[l] = 0 for the steepest descent method,

γ^[l] = ||(∂θ2/∂b')^[l]||^2 / ||(∂θ2/∂b')^[l-1]||^2    (38)

for the Fletcher-Reeves method, and η^[l+1] is a scalar whose value is obtained by a line search algorithm for each iteration l. For l = 0 (first step) the Fletcher-Reeves method cannot be used because (∂θ2/∂b')^[l-1] is not defined, and thus we must use the steepest descent method (i.e., γ^[0] = 0). For the other steps, the numerical applications have indicated that the Fletcher-Reeves method is faster and thus preferable. However, it was observed that in some instances during the iteration procedure the Fletcher-Reeves method may become too slow; in such instances the procedure is accelerated if we perform one iteration with the steepest descent method and then continue again with the Fletcher-Reeves method.

What remains is a procedure to construct an initial matrix b'^[0]. This is not so important because, as demonstrated in the case study below (section 5), the method is very fast in the first phase of the iteration procedure. Even if we start with a b'^[0] leading to an unrealistically high value of the objective function, this value is dramatically reduced in the first few iterations. The general idea for the construction of the initial matrix b'^[0] is to start decomposing the matrix c' by triangular decomposition, which is the simplest of all procedures [Press et al., 1992, p. 97], and occasionally performing some corrections when this procedure fails. This will be demonstrated more clearly in the case study below (section 5).

5. Case Study

To illustrate the developed method, we present a part of a real world case study. This case study aims at the generation of simultaneous monthly rainfall and runoff at three basins, namely, Evinos, Mornos, and Yliki, supplying water to Athens, Greece [Nalbantis and Koutsoyiannis, 1997]. This constitutes a multivariate generation problem with six locations (two variables x three basins). As a general framework for the generation, the simple disaggregation model [Koutsoyiannis and Manetas, 1996] was adopted. Specifically, as a first step the annual variables for all locations are generated using a multivariate AR(1) model. Then the annual quantities are disaggregated into monthly ones, using a combination of a multivariate contemporaneous seasonal AR(1) (or PAR(1)) model and an accurate adjusting procedure, as described by Koutsoyiannis and Manetas [1996]. Here we focus on the PAR(1) model, described by (3), which generates the vector of random variables X_s at the six locations for each month s, given those of the previous month X_{s-1}. The model parameters are given by (11), (5), (6), (7), and (9), in which Y and Z must be replaced with X_s and X_{s-1}, respectively.

In Table 1 we display the sample statistics of the monthly rainfall and runoff data that are necessary for estimating the contemporaneous PAR(1) model parameters for subperiod (month) s = 8 (May; note that the hydrologic year starts in October). These are the first three marginal moments at all

KOUTSOYIANNIS:

OPTIMAL

DECOMPOSITION

locations(representedby the means,standarddeviations,and


coefficients
of skewness)
andthe matricesof crosscorrelations
for all locationsfor both subperiodss = 8 and s - 1 = 7; the
autocorrelationsamongsubperiodss = 8 and s - 1 = 7 for
eachlocationare alsoneededand givenin Table 1. Note that
the autocorrelationof rainfall was found statisticallyinsignificant and thus the autocorrelation coefficientswere set to zero;

OF COVARIANCE

MATRICES

1225

also attemptsto preseryethe highestcross-correlation


coefficient in each matrix row, simultaneously
reducingthe other
cross-correlationcoefficients.As shown in Table 2, the maximum coefficient of skewness of the initial solution 0a has been

reduced to 18.31, a value much less than the 29,089.50 of the

initialsolution0; IId*llis almostzeroin the initialsolution0a,


whereasIldllunavoidably
remainspositive.

on the contrary,the autocorrelationof runoff is significantfor


Given the initial solutions0 and 0a, we can directlyproceed
all three basins.To increasereadabilityand understandability to the optimizationproceduredescribed
in section4.To this
of the data, the form of the statisticsgivenin Table 1 differs endwe usethe objectivefunction(29) and its derivatives(36)
X = 1, /2= 103,andX3= 10-s asexplained
in
from that displayedin the equations(e.g.,coefficients
of skew- withweights
ness instead of third moments; matrices of correlation coeffi- section3. The evolutionof the objectivefunctionand its three
cientsinsteadof covariancematrices);the transformationbe- componentsfor the first 40 iterationsare shownin Figure 2 for
tween the two forms is direct and givenin textbooks.
both initial solutions0 and 0a. We observein Figure 2 that the
There seemsto be nothingstrangeaboutthe statisticsgiven unreasonable
valueof IIll - 29,089.50of the initialsolution0
in Table 1 as the cross correlations and autocorrelations
are
decreasesrapidlyin the first 5-10 iterationstowardmore reanot too high;the skewnesses
are not zero and alsoare not too sonable
values.Also the positivevalueof IId*llof the initial
high(theydo not exceed1.75).However,the matrixc of (12) solution 0 decreasesrapidly in the first four iterations to a
(andc' of (14)) is not positivedefinite,andthustheredoesnot value closeto zero. As expected,the performanceof the evoexistan exactsolutionfor b (or b'). Notably,the absenceof lution of the initial solution 0a seems better than that of the
positivedefiniteness
occursin 10 out of 12 cases(months)for initial solution 0 at the same iteration number. However, the
the monthlyPAR(l) model as well as in the annualAR(1) finally obtainedvaluesof the objectivefunction and its commodel, in the case study examined.This indicatesthat the ponentsare the samefor both initial solutions.This indicates
problemstudiedin this paper mightbe quite frequentin mul- that there is no need to use a complicatedprocedureto obtain
tivariate stochasticsimulationproblems.
aninitialsolution
b'[1.
It suffices
to usethesimple
Lane[1979]
To startremedyingthe problemfor s - 8, we needan initial procedurewith the modificationthat the diagonalelementsof
solution
b'[1.To thisendwe applyLane's[1979]procedure, b' are not allowedto take valuessmallerthan brain. In the
which is appropriatefor positivesemidefinitematrices(see initial solution0 of this examplewe assumedthat b nin= 0.05;
also Salaset al. [1988, p. 87] and Lane and Frevert[1990, p. however,we suggesta muchhighervalue,e.g.,b nin= 0.5, to
V-15]). This procedurederivesa lowertriangularb' giventhe avoid unreasonablyhigh initial skewnesscoefficientsand acmatrix c', first calculatingthe first matrix column, then the celerateconvergence.
The final solutionsb', referred to as final solutions1 and la,
second,third, etc. When the matrix c' is not positivedefinite
(asin our case),at somecolumn(in our caseat column5) the which were obtainedby the optimizationprocedurestarting
procedureassignsall matrix elementsequal to zero and, be- with the initial solutions0 and 0a, respectively,are shown
sides,the estimated
matrixb' no longerobeysb'b'r = c'. graphicallyin Figureslc and ld. They are almostindistinguishFurthermore, a zero column within b' is also transferred in able,but they are not exactlythe same.The resultingvaluesof
b'(3)sothat(b'(3))-, whichappears
in (19), doesnot exist. the objectivefunction and its components,shownin Table 2,
Thus the vector of the skewnesscoefficientsof the auxiliary are the same for both final solutions1 and la (for normal
variables,t,becomesinfinite.To avoidthis,it sufficesto setthe computerprecision).This may meanthat there are (at least)
diagonalelement of the columnwith zero elementsequal to two closelocalminimaof the objectivefunction.It may alsobe
somearbitraryvalueb nin(SOthat 0 < b nin< 1). There is no interpreted differently, that is, both obtained solutionsare
reasonto investigatethoroughlywhat would be the most ap- approximationsof a single global minimum that cannot be
propriate value of b 'ninbecausethis is used to establishan locatedexactlydue to computerprecisionlimitations.Neverinitial solutiononly;thisinitial solutionis then modifiedby the theless,the exacttheoreticalmeaningof the nature of the two
optimizationprocedure.Here we haveusedb ain= 0.05, and solutionsis not important;what is important is that we have
the derived solution is thereafter referred to as initial solution
one or more reasonablematricesb' and other related param0. The elementsof b' for that solutionare showngraphicallyin eters that can be used directly for stochasticsimulation.This
Figure la. The resultingvaluesof the objectivefunctionand its simulationwill preservethe variancesof the variablesbecause
of skewness,
because
the
three components
are shownin Table 2 (wherefor the sakeof IId*ll- 0, aswell astheircoefficients
readabilitywe displaythe squarerootsof the components).
We obtained
maxi(s) = 5.37 is lowenoughto be achieved
for
observe that for the initial solution 0 the maximum coefficient
the auxiliaryvariables(assumingthat the syntheticsamplesize
of skewness
of the auxiliaryvariablesis extremelyhigh,that is, will havelengthgreaterthan about50). However,the simulamax.
(i) IIll - 29,089.50.
Apparently,
suchskewness
can- tion will not preserveexactlythe cross-correlationcoefficients
not be reachedin anygeneratedsample.Also,we observethat (sinceIIdll- 0.1404> 0), but thisis an unavoidable
conseIIdllis notzeroandit couldnotbe sobecause
c' is not positive quenceof the absenceof positivedefinitenessof c'. To havean
definite;alsoIId*11
is not zero.
indication of how large the errors in preservingthese crossFor comparisonwe have also determined another initial correlation coefficientsare, we have performed the inverse
solution referred to as initial solution 0a, and also shown calculations,
that is, given b' we solved(12) for Cov [Y, Y]
graphicallyin Figure lb. For thissolutionwe haveuseda more (whereY -- XS), alsoreplacing
c with (h-b'b'rh-). The
coefficientsare shownin Table 3
complicatedprocedureoutlinedby Koutsoyiannis
[1992].This resultingcross-correlation
procedureimposesa lowerlimit in eachdiagonalelementof b' amongwith their differencesfrom the "correct"valuesof Tato avoidextremelyhighskewness
coefficients,assuresthe pres- ble 1, as the latter were estimated from the historical data. We
ervationof the diagonalelements
of c' (sothat IId*ll 0), and observethat errorsare almostnegligible(i.e., within _+0.03).

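Lane's [1979] column-by-column construction, together with the b'_min modification described above, can be sketched as follows. This is an illustrative reimplementation, not the original code; the treatment of a failed column (zeroing it and flooring the diagonal element) follows the description in the text.

```python
import numpy as np

def initial_lower_triangular(c, b_min=0.05):
    """Cholesky-type construction of a lower triangular b with b b^T ~= c.

    When c is not positive definite, the pivot under the square root
    becomes nonpositive at some column; that column is then left at zero
    except for its diagonal element, which is set to the floor b_min so
    that the skewness vector of the auxiliary variables stays finite.
    """
    n = c.shape[0]
    b = np.zeros((n, n))
    for j in range(n):
        pivot = c[j, j] - np.dot(b[j, :j], b[j, :j])
        if pivot <= 0.0:           # indefinite c: column cannot be completed
            b[j, j] = b_min        # floor keeps (b^(3))^-1 defined
            continue               # off-diagonal elements of column j stay zero
        b[j, j] = np.sqrt(pivot)
        for i in range(j + 1, n):
            b[i, j] = (c[i, j] - np.dot(b[i, :j], b[j, :j])) / b[j, j]
    return b
```

For a positive definite c this reduces to the ordinary triangular (Cholesky) decomposition; for an indefinite c it merely supplies a finite starting point for the optimization, exactly as the initial solution 0 does in the case study.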
Figure 1. Plots of the elements of matrices b' for the various solutions examined in the case study: (a) initial solution 0; (b) initial solution 0a; (c) final solution 1; (d) final solution 1a; (e) final solution 2; (f) final solution 3 (see text).
To acquire an indication of how low the error in preserving the cross correlations could ultimately be if we ignore completely the preservation of skewness, we performed another optimization, setting λ3 = 0 in the objective function. The resulting final solution 2 is shown graphically in Figure 1e, and the relevant values of the objective function and its components are shown in Table 2. We observe that the further reduction in ||d|| is not impressive (||d|| = 0.1386 against ||d|| = 0.1404 of final solution 1).

As a further investigation, we also assessed the ultimate lowest value of the coefficient of skewness max_j(ξ_j) by ignoring the error in preserving cross covariances. Thus we performed another minimization of the objective function, setting λ1 = 0, λ2 = 10³, and λ3 = 10⁻⁵. The resulting final solution 3 is shown graphically in Figure 1f, and the relevant values of the objective function and its components are shown in Table 2. Again, the further reduction in max_j(ξ_j) is not impressive (max_j(ξ_j) = 4.57 against max_j(ξ_j) = 5.37 of final solution 1).
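The three error norms and their weighted combination, with the weights used in these experiments, can be sketched as below. The weighted-sum form shown for the objective and the use of the maximum norm for ξ are assumptions of this illustration (the paper's (29) uses a p-norm with even p); phi stands for the constant vector of eq. (19), taken here as given.

```python
import numpy as np

def objective_components(b, c, phi):
    """Square roots of the three components of the objective (cf. Table 2).

    d   = b b^T - c        : error in covariances
    d*  = diagonal part of d : error in variances
    xi  = (b^(3))^{-1} phi : skewness coefficients of the auxiliary
                             variables, eq. (19); b^(3) has elements b_ij^3
    """
    d = b @ b.T - c
    d_star = np.diag(np.diag(d))
    xi = np.linalg.solve(b ** 3, phi)          # elementwise cube of b
    return (np.linalg.norm(d),                  # ||d||
            np.linalg.norm(d_star),             # ||d*||
            np.linalg.norm(xi, ord=np.inf))     # max_j |xi_j|

def objective(b, c, phi, lam=(1.0, 1e3, 1e-5)):
    """theta^2 = lam1 ||d||^2 + lam2 ||d*||^2 + lam3 ||xi||^2 (assumed form)."""
    nd, nds, nxi = objective_components(b, c, phi)
    return lam[0] * nd ** 2 + lam[1] * nds ** 2 + lam[2] * nxi ** 2
```

Setting lam[2] = 0 reproduces the experiment of final solution 2, and lam[0] = 0 that of final solution 3.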
Table 2. Values of the Square Root of the Objective Function θ and Its Three Components ||d||, ||d*||, and ||ξ|| (Also, in Comparison With max_j(ξ_j)) for the Initial and Final Solutions of the Case Study Examined

              Initial       Initial       Final Solutions   Final         Final
              Solution 0    Solution 0a   1 and 1a*         Solution 2†   Solution 3‡
||d*||        0.3238        0.0012        0.0000            0.0000        0.0000
||d||         0.3297        0.6394        0.1404            0.1386        0.8136
||ξ||         29,089.50     18.31         5.50              7.94          4.58
max_j(ξ_j)    29,089.50     18.31         5.37              7.19          4.57
θ             4.1806        0.1076        0.0292            0.0231        0.0145

*Optimum for the combination of ||d|| and ||ξ|| (λ1 = 1, λ2 = 10³, λ3 = 10⁻⁵).
†Optimum for ||d|| (λ1 = 1, λ2 = 10³, λ3 = 0).
‡Optimum for ||ξ|| (λ1 = 0, λ2 = 10³, λ3 = 10⁻⁵).
Figure 2. Evolution of the value of the square root of the objective function θ and its three components ||d||, ||d*||, and ||ξ|| for the first 40 iterations of the optimization procedure starting with initial solutions 0 and 0a (see text).
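The iterative scheme whose convergence Figure 2 depicts (section 4) uses a steepest descent first step followed by Fletcher-Reeves conjugate gradient updates with the coefficient of eq. (38). A generic sketch, with the gradient and the line search supplied by the caller; the fixed iteration count and the callable interface are assumptions of this illustration.

```python
import numpy as np

def minimize_fr(grad, x0, step, n_iter=40):
    """Fletcher-Reeves conjugate gradient with a steepest-descent first step.

    grad : callable returning the gradient (array) at x
    step : callable(x, direction) returning a scalar step length (line search)
    """
    x = x0.copy()
    g = grad(x)
    direction = -g                      # first step: steepest descent (gamma = 0)
    for _ in range(n_iter):
        x = x + step(x, direction) * direction
        g_new = grad(x)
        # Fletcher-Reeves coefficient, eq. (38): ratio of squared gradient norms
        gamma = np.dot(g_new, g_new) / np.dot(g, g)
        direction = -g_new + gamma * direction
        g = g_new
    return x
```

Forcing gamma to zero at every step turns the same loop into pure steepest descent, which is how the procedure restarts when the Fletcher-Reeves iterations become too slow.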

The results of the above investigations indicate that there is no strong conflict between the objectives of preserving the coefficient of skewness and the cross-correlation coefficients. The same indication is also acquired from Figure 1, where we observe that all final solutions 1, 1a, 2, and 3 are similar to each other, although they differ significantly from the initial solutions 0 and 0a.

6. Summary, Conclusions, and Discussion

A new method is proposed for decomposing covariance matrices that appear in the parameter estimation phase of all multivariate stochastic models in hydrology. This method applies not only to positive definite covariance matrices (as do the typical methods of the literature) but also to indefinite matrices, which often appear in stochastic hydrology. It is also appropriate for preserving the skewness coefficients of the model variables, as it accounts for the resulting coefficients of skewness of the auxiliary (noise) variables used by the stochastic model. The method is formulated in an optimization framework with the objective function composed of three components aiming at (1) complete preservation of the variances of variables, (2) optimal approximation of the covariances of variables, in the case that complete preservation is not feasible due to an inconsistent (i.e., not positive definite) structure of the covariance matrix, and (3) preservation of the skewness coefficients of the model variables by keeping the skewness of the auxiliary variables as low as possible. Analytical expressions for the derivatives of this objective function are derived, which allow the development of an effective nonlinear optimization algorithm using the steepest descent or the conjugate gradient methods.

An advantage of the method is its unique formulation, applicable to both positive definite and indefinite matrices, and to symmetric or skewed distributions of variables. Besides, the weighting factors incorporated in the objective function allow for giving more or less emphasis to each one of its three components. In the case of a consistent (positive definite) covariance matrix of variables having symmetric (e.g., Gaussian) distributions, the method will result in the triangular decomposition of the matrix, which is the simplest among all solutions. If the covariance matrix is indefinite while the variables still have symmetric distributions, the method will result in a decomposed (square root) matrix which corresponds to the least significant alteration, or the best approximation, of the original covariance matrix. If the distributions of variables are skewed, then the method solution (either for consistent or inconsistent covariance matrices) will be appropriately modified to simultaneously account for avoiding too high coefficients of skewness of the auxiliary variables.

Table 3. Cross-Correlation Coefficients Corr[X_i, X_j] Resulting From b' of the Final Solution 1a

        j=1           j=2           j=3           j=4           j=5           j=6
i=1     1.00 (0.00)   0.51 (-0.03)  0.77 (0.01)   0.54 (0.01)   0.54 (-0.03)  0.49 (0.00)
i=2     0.51 (-0.03)  1.00 (0.00)   0.28 (-0.01)  0.80 (-0.01)  0.17 (0.02)   0.67 (0.00)
i=3     0.77 (0.01)   0.28 (-0.01)  1.00 (0.00)   0.20 (0.00)   0.74 (-0.01)  0.23 (0.00)
i=4     0.54 (0.01)   0.80 (-0.01)  0.20 (0.00)   1.00 (0.00)   0.26 (-0.01)  0.45 (0.00)
i=5     0.54 (-0.03)  0.17 (0.02)   0.74 (-0.01)  0.26 (-0.01)  1.00 (0.00)   0.22 (0.00)
i=6     0.49 (0.00)   0.67 (0.00)   0.23 (0.00)   0.45 (0.00)   0.22 (0.00)   1.00 (0.00)

The values in parentheses are the differences from the corresponding values of Table 1.
The real world application examined for the sake of illustration and numerical exploration of the method indicates its very satisfactory performance both in approaching the covariances of an inconsistent matrix and in yielding low coefficients of skewness of the auxiliary variables, although the initial coefficients of skewness were extremely and unreasonably high. Moreover, it reveals that there is no strong conflict between the objectives of preserving the covariances and the coefficients of skewness. Finally, it indicates a stable behavior of the optimizing algorithm.

The stochastic model that was used as the grounds for the development of the method (section 2) is generalized so as to represent many of the typical models of the literature. However, it does not cover explicitly all possible multivariate models, for example, the ARMA and PARMA models [e.g., Stedinger et al., 1985; Salas, 1993]. In these types of models the problem of decomposing covariance matrices still appears [Salas, 1993, pp. 19.29-19.30], and the proposed method may also be used. Some adaptations of the equations are needed, mainly those regarding the coefficients of skewness. Generally, the method allows for adaptations not only in cases of different types of models but also in situations where different conditions arise for the solution matrix. As a simplified example of a conditional rearrangement of the method, let us consider the case where the model variable of one location has a known value (e.g., given as an output of another model). Obviously, this implies a linear constraint for the elements of a single row of the decomposed matrix. This constraint can be easily incorporated in the optimization framework, the simplest way being the appending of a penalty term to the objective function.

Appendix: Proof of (31)-(33)

From (20) we have

    d_kl = Σ(r=1 to n) b'_kr b'_lr - c'_kl    (A1)

so that

    ||d||² = Σ(k=1 to n) Σ(l=1 to n) d_kl²    (A2)

Clearly,

    ∂b'_kr/∂b'_ij = δ_ki δ_rj    (A3)

where

    δ_ij := 0,  i ≠ j;    δ_ij := 1,  i = j    (A4)

Therefore,

    ∂d_kl/∂b'_ij = Σ(r=1 to n) b'_kr δ_li δ_rj + Σ(r=1 to n) b'_lr δ_ki δ_rj    (A5)

which results in

    ∂d_kl/∂b'_ij = b'_kj δ_li + b'_lj δ_ki    (A6)

The partial derivative of ||d||² with respect to b'_ij will be

    ∂||d||²/∂b'_ij = Σ(k=1 to n) Σ(l=1 to n) 2 d_kl ∂d_kl/∂b'_ij
                   = 2 Σ(k=1 to n) Σ(l=1 to n) d_kl (b'_kj δ_li + b'_lj δ_ki)    (A7)

or

    ∂||d||²/∂b'_ij = 2 Σ(l=1 to n) d_li b'_lj + 2 Σ(l=1 to n) d_il b'_lj    (A8)

and, because d is symmetric,

    ∂||d||²/∂b'_ij = 4 Σ(l=1 to n) d_il b'_lj    (A9)

We observe that the sum on the right-hand side of (A9) is the (i, j)th element of the matrix db', and thus this proves (31).

In a similar manner (and also considering (22)), the partial derivative of ||d*||² with respect to b'_ij is

    ∂||d*||²/∂b'_ij = Σ(k=1 to n) 2 d*_kk ∂d_kk/∂b'_ij
                    = 2 Σ(k=1 to n) d*_kk (b'_kj δ_ki + b'_kj δ_ki)
                    = 4 Σ(k=1 to n) d*_kk δ_ki b'_kj = 4 d*_ii b'_ij    (A10)

which proves (32).

To determine the partial derivatives of ||ξ||, we must first find the partial derivatives of ξ. We denote

    g := (b'^(3))⁻¹    (A11)

so that (19) becomes

    ξ = g φ    (A12)

Since φ is a vector of constants, we have

    ∂ξ/∂b'_ij = (∂g/∂b'_ij) φ    (A13)

Equation (A11) can be written as

    g b'^(3) = I    (A14)

where I is the identity matrix. Therefore,

    (∂g/∂b'_ij) b'^(3) + g (∂b'^(3)/∂b'_ij) = 0    (A15)

where 0 is the zero matrix (i.e., with all its elements zero), and thus

    ∂g/∂b'_ij = -g (∂b'^(3)/∂b'_ij) (b'^(3))⁻¹ = -g (∂b'^(3)/∂b'_ij) g    (A16)

Combining (A13), (A16), and (A12), we get

    ∂ξ/∂b'_ij = -g (∂b'^(3)/∂b'_ij) g φ = -g (∂b'^(3)/∂b'_ij) ξ    (A17)

Consequently,

    ∂ξ_k/∂b'_ij = -Σ(r=1 to n) Σ(s=1 to n) g_ks (∂b'³_sr/∂b'_ij) ξ_r    (A18)

and since

    ∂b'³_sr/∂b'_ij = 3 b'²_sr δ_si δ_rj    (A19)

we get

    ∂ξ_k/∂b'_ij = -3 Σ(r=1 to n) Σ(s=1 to n) g_ks b'²_sr δ_si δ_rj ξ_r    (A20)

or

    ∂ξ_k/∂b'_ij = -3 Σ(r=1 to n) g_ki b'²_ir δ_rj ξ_r    (A21)

and finally,

    ∂ξ_k/∂b'_ij = -3 g_ki b'²_ij ξ_j    (A22)

Now we can proceed to the calculation of the derivatives of ||ξ||²_p. Assuming that p is even, (26) can be written

    ||ξ||_p^p = Σ(k=1 to n) ξ_k^p    (A23)

so that

    ∂||ξ||²_p/∂b'_ij = 2 ||ξ||_p^(2-p) Σ(k=1 to n) ξ_k^(p-1) ∂ξ_k/∂b'_ij    (A24)

Combining (A22) and (A24) we get

    ∂||ξ||²_p/∂b'_ij = -6 ||ξ||_p^(2-p) b'²_ij ξ_j Σ(k=1 to n) ξ_k^(p-1) g_ki    (A25)

whereas combining (A11) and (35) we have

    Σ(k=1 to n) ξ_k^(p-1) g_ki = ψ_i    (A26)

Also considering (34) we get

    Σ(k=1 to n) ξ_k^(p-1) g_ki b'²_ij ξ_j = w_ij    (A27)

so that, finally,

    ∂||ξ||²_p/∂b'_ij = -6 ||ξ||_p^(2-p) w_ij    (A28)

which proves (33).
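The gradient expressions just proved are easy to verify numerically. The sketch below checks the matrix forms of (31) and (32), i.e., ∂||d||²/∂b = 4db and ∂||d*||²/∂b_ij = 4 d*_ii b_ij (writing b for the generic decomposed matrix), against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
b = rng.normal(size=(n, n))
c0 = rng.normal(size=(n, n))
c = c0 + c0.T                                   # any symmetric target matrix

def norms(b):
    d = b @ b.T - c
    return np.sum(d ** 2), np.sum(np.diag(d) ** 2)   # ||d||^2 and ||d*||^2

d = b @ b.T - c
grad_d = 4.0 * d @ b                            # analytic gradient, eq. (31)
grad_dstar = 4.0 * np.diag(d)[:, None] * b      # analytic gradient, eq. (32)

eps = 1e-6
num_d = np.zeros((n, n))
num_ds = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        bp = b.copy(); bp[i, j] += eps
        bm = b.copy(); bm[i, j] -= eps
        fp, gp = norms(bp)
        fm, gm = norms(bm)
        num_d[i, j] = (fp - fm) / (2 * eps)     # central difference
        num_ds[i, j] = (gp - gm) / (2 * eps)

assert np.allclose(grad_d, num_d, atol=1e-4)
assert np.allclose(grad_dstar, num_ds, atol=1e-4)
```

The same finite-difference test can be applied to the skewness term (33) once ξ, ψ, and w are available from (19), (35), and (34).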

Acknowledgments. The research leading to this paper was performed within the framework of the project Evaluation and Management of the Water Resources of Sterea Hellas, Phase 3, project 9476702, funded by the Greek Ministry of Environment, Regional Planning and Public Works, Directorate of Water Supply and Sewage. The author wishes to thank the director T. Bakopoulos, the staff of this directorate, and E. Tiligadas of Directorate D7 for the support of this research. Thanks are also due to I. Kioustelidis for his helpful suggestions and I. Nalbantis for his comments. Finally, the author is grateful to the Associate Editor P. Rasmussen and the reviewer E. Todini for their constructive comments that were very helpful for an improved, more complete, and more accurate presentation.

References

Bras, R. L., and I. Rodriguez-Iturbe, Random Functions in Hydrology, Addison-Wesley, Reading, Mass., 1985.
Grygier, J. C., and J. R. Stedinger, SPIGOT, A synthetic streamflow generation software package, technical description, version 2.5, Sch. of Civ. and Environ. Eng., Cornell Univ., Ithaca, N.Y., 1990.
Kirby, W., Algebraic boundedness of sample statistics, Water Resour. Res., 10, 220-222, 1974.
Koutsoyiannis, D., A nonlinear disaggregation model with a reduced parameter set for simulation of hydrologic series, Water Resour. Res., 28, 3175-3191, 1992.
Koutsoyiannis, D., and A. Manetas, Simple disaggregation by accurate adjusting procedures, Water Resour. Res., 32, 2105-2117, 1996.
Lane, W. L., Applied stochastic techniques, user's manual, Eng. and Res. Cent., Bur. of Reclamation, Denver, Colo., 1979.
Lane, W. L., and D. K. Frevert, Applied stochastic techniques, user's manual, personal computer version, Eng. and Res. Cent., Bur. of Reclamation, Denver, Colo., 1990.
Marlow, W. H., Mathematics for Operations Research, Dover, New York, 1993.
Matalas, N. C., and J. R. Wallis, Correlation constraints for generating processes, paper presented at the International Symposium on Mathematical Models in Hydrology, Warsaw, Poland, July 1971.
Matalas, N. C., and J. R. Wallis, Generation of synthetic flow sequences, in Systems Approach to Water Management, edited by A. K. Biswas, McGraw-Hill, New York, 1976.
Mays, L. W., and Y.-K. Tung, Systems analysis, in Water Resources Handbook, edited by L. W. Mays, McGraw-Hill, New York, 1996.
Mejia, J. M., and J. Millán, Una metodología para tratar el problema de matrices inconsistentes en la generación multivariada de series hidrológicas, paper presented at VI Congreso Latino Americano de Hidráulica, Bogotá, Colombia, 1974.
Nalbantis, I., and D. Koutsoyiannis, A parametric rule for planning and management of multiple-reservoir systems, Water Resour. Res., 33, 2165-2177, 1997.
Pierre, D. P., Optimization Theory With Applications, Dover, New York, 1986.
Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C, Cambridge Univ. Press, New York, 1992.
Rasmussen, P. F., J. D. Salas, L. Fagherazzi, J.-C. Rassam, and B. Bobée, Estimation and validation of contemporaneous PARMA models for streamflow simulation, Water Resour. Res., 32, 3151-3160, 1996.
Salas, J. D., Analysis and modeling of hydrologic time series, in Handbook of Hydrology, edited by D. Maidment, chap. 19, McGraw-Hill, New York, 1993.
Salas, J. D., J. W. Delleur, V. Yevjevich, and W. L. Lane, Applied Modeling of Hydrologic Time Series, Water Resour. Publ., Littleton, Colo., 1988.
Slack, J. R., I would if I could (self-denial by conditional models), Water Resour. Res., 9, 247-249, 1973.
Stedinger, J. R., D. P. Lettenmaier, and R. M. Vogel, Multisite ARMA(1,1) and disaggregation models for annual streamflow generation, Water Resour. Res., 21, 497-509, 1985.
Todini, E., The preservation of skewness in linear disaggregation schemes, J. Hydrol., 47, 199-214, 1980.
Valencia, D., and J. C. Schaake, A disaggregation model for time series analysis and synthesis, Rep. 149, Ralph M. Parsons Lab. for Water Resour. and Hydrodyn., Mass. Inst. of Technol., Cambridge, 1972.
Valencia, D., and J. C. Schaake, Disaggregation processes in stochastic hydrology, Water Resour. Res., 9, 211-219, 1973.
Wallis, J. R., N. C. Matalas, and J. R. Slack, Just a moment!, Water Resour. Res., 10, 211-219, 1974.

D. Koutsoyiannis, Department of Water Resources, Faculty of Civil Engineering, National Technical University, Heroon Polytechniou 5, GR-157 80 Zografou, Athens, Greece. (dk@hydro.ntua.gr)

(Received March 26, 1998; revised November 13, 1998; accepted November 13, 1998.)