
Department of Numerical Analysis and Computer Science

TRITA-NA-0307
A Toolbox for Multi-Attribute Decision-Making
Joel Brynielsson and Klas Wallenius
Report number: TRITA-NA-0307
Publication date: December 2003
E-mail of authors: {joel, klasw}@nada.kth.se
Reports can be ordered from:
Numerical Analysis and Computer Science (NADA)
Royal Institute of Technology (KTH)
SE-100 44 Stockholm
SWEDEN
telefax: +46 8 790 09 30
http://www.nada.kth.se/theory/dsg/
A Toolbox for Multi-Attribute Decision-Making

Joel Brynielsson and Klas Wallenius


Department of Numerical Analysis and Computer Science
Royal Institute of Technology
SE-100 44 Stockholm
SWEDEN
{joel, klasw}@nada.kth.se
Abstract
There are obvious opportunities to incorporate multiagent simulation in
decision support tools for military commanders. To be successfully
integrated, however, the simulation tool must fit into the overall decision
process in which the commander is involved. In this paper we propose
some important properties of simulation-based tools for decision-making,
based on different theories of how decisions are made. Our theoretical
contribution is the definition of a non-linear utility function that should
fit prevailing cognitive models of decision-making better than traditional
linear utility functions do. Finally, we specify a toolbox that, based on
this utility function, multiagent-based simulation, and genetic algorithms,
may be used to evolve strategies and to support decision-making in the
Command and Control domain.
1 Introduction
The use of simulation to evaluate optional strategies has long been a dream of
decision-makers, especially military ones. Standards for distributed, large-scale,
and very complex simulations, such as HLA, have until recently been the
dominant manifestation of this desire [Gagnon and Stevens 1999]. Multiagent-based
simulation, as opposed to the large-scale approach, aims to better capture
the non-deterministic and non-linear components of the combat problem
[Horne 1999].
The ISAAC system (Irreducible Semi-Autonomous Adaptive Combat) is one
among several efforts in this latter direction. ISAAC tries to capture the essence
of combat by defining a rather small set of features of the war-fighting agents.
With these agents, user-defined strategies for a "blue" side can be evaluated
against a "red" side over and over again, each time with different initial
constraints. By offering the possibility to explore the vast outcome space of such
repetitive simulations, an understanding of the dynamics of the battle space can
be achieved. New strategies can also be evolved automatically by use of genetic
algorithms, thus letting the system search for possible solutions [Ilachinski 1997,
1998].

(Originally written in 2001 as an examination assignment at the Swedish
National Defence College.)
We see obvious opportunities to use multiagent-based simulation not only to
evaluate but also to evolve alternative strategies for decision-making. Based on
experiences from the ISAAC system together with different theories on
decision-making, we will specify a decision process and a tool, the Strategy Evolution
Toolbox, for evolving strategies in the Command and Control domain.
2 Theoretical Foundations of Decision-Making
According to Kleindorfer et al. [1993], the many theories involved in
decision-making can be classified as either prescriptive, in that they prescribe which
decisions should be made, or descriptive, in that they describe how decisions are
made. Following this classification, prescriptive theories like Decision Theory
[Berger 1985; Jaynes 1996; Raiffa 1968] study rational behavior in which the
optimal decision maximizes the expected utility. In Game Theory [Myerson 1991;
Osborne and Rubinstein 1994] there are two or more parties that,
interdependently, aim to maximize utility.
The descriptive theories, on the other hand, study the psychological or social
processes of decision-making. In Cognitive Psychology [Montgomery 1992] it
has been shown that the decision process of an individual follows a set of rather
simple principles for comparing different decision alternatives. One example of
such a principle is the conjunctive rule, which states that decision alternatives
not complying with all requirements should be rejected. This rule, however, is
derived from studies of situations where explicit decision alternatives are
presented to the individuals. In contrast, psychologists in the discipline of
Naturalistic Decision-Making [Klein 1989] have shown that decision-making in natural
settings is a process tightly connected to the process of understanding the
problem. Once the problem is fully understood, the decision-maker also has its
solution.

The conclusion is that tools to support decision-making should assist the users
in making well-founded decisions according to the prescriptive theories. At the
same time, the tools should make the users feel comfortable in their work by
supporting natural decision processes according to the descriptive sciences.
Consequently, we will later specify a toolbox that supports the concurrent
processes of defining the problem and solving it, according to naturalistic
decision processes (see Figure 1). By setting up requirements and by quantifying
the significance of different consequences, the user will be able to define his
preferences exactly. The tentative solutions, or strategies, may be invented
either by the decision-maker or by the system. These strategies may then be
evaluated by the use of agent-based simulation, which predicts the expected
consequences of each solution, and by evaluating the predicted consequences
against the preferences according to principles from Decision Theory.
[Figure 1 shows a cycle of four activities: Define Preferences (performed by the
user), Evolve Strategies (by the user or a genetic algorithm), Predict
Consequences (by agent-based simulation), and Evaluate Fitness (by the utility
function).]
Figure 1: The decision process supported by the Strategy Evolution Toolbox.
The process is iterative in the sense that, after drawing conclusions from one
set of strategies, the user may either evolve new strategies or change the
preferences to start a new evaluation cycle.
We note that the rational behavior approach assumes that the value of the
expected consequences of the decisions can be measured on one scale (the
expected utility) even when the consequences are of different qualities. We also
observe that there is a gap between this view and the goal-oriented view
of the descriptive decision sciences. The value of a decision, according to the
descriptive sciences, is measured against a set of goals or requirements, rather
than against a utility scale. This is especially true in the domain of Command
and Control, where the commander is to solve a task with associated goals and
requirements [Coakley 1991]. In the following sections we investigate the
properties of utility maximization versus decision-making based on requirements,
in order to design proper instruments for evaluating simulation results.
3 Cognitive Models
Pure utility theory fails to address certain decision problems. Such problems
arise when there are conditions on the attributes reflecting that they are worth
more or less in different situations. One example could be that a certain
attribute is not worth anything until a certain value is reached (an apartment
is worthless if it is not located within 10 kilometers of work). Another
example could be that the marginal utility of a certain attribute diminishes as
its value grows (it is more useful to come a kilometer closer to work for a person
who lives two kilometers away than for a person who lives one kilometer away).
In Cognitive Psychology, other decision-making principles have been used as an
alternative to pure utility maximization theory. Montgomery [1992] lists
the following six decision rules:
The dominance rule means that if one strategy is better than all the other
strategies in at least one attribute, and at least equivalent to the other strategies
in all other attributes, then this strategy should be chosen. The dominance rule
seems obviously correct, but it can only be used in the particular situations
where one strategy actually dominates the others.

The conjunctive rule means that the utility in each attribute must not be
below a certain threshold value that is specific for each attribute. The
conjunctive rule may be especially suitable in situations where the strategies are
not given in advance but are obtained gradually.

The disjunctive rule states that the utility in at least one attribute of the
chosen strategy should exceed a certain threshold value that is specific for this
attribute.

The lexicographic rule requires, unlike the previous rules, that the attributes
be ranked with respect to how important they are. Making a decision means
choosing the strategy that is best in the most important attribute.

Choice of the alternative with the most attractive value of a single attribute
means that one chooses the strategy that has the highest utility value in a
single attribute over all other attributes and strategies.

The addition rule requires both that attributes can be compared with each other
and, unlike the previous rules, that it is possible to sum the utility values.
The rule means that all the attribute utility values for a certain strategy are
added together, in order to choose the strategy that yields the greatest sum.

Obviously, these criteria cannot be combined so that they are all fulfilled at the
same time. However, we believe that they should be used as the basis
for a preference function that makes use of all of them to some extent; the
sketch below illustrates three of the rules.
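As a minimal illustration (a sketch of our own with invented numbers; Python
is used for all code sketches below), three of the rules can be expressed directly
over a utility matrix u[i][j], the utility of strategy i in attribute j:

    # Illustrative only: Montgomery's conjunctive, disjunctive, and addition
    # rules applied to an invented utility matrix u[i][j].
    u = [[0.9, 0.2, 0.7],   # strategy 1
         [0.6, 0.5, 0.6],   # strategy 2
         [0.4, 0.8, 0.3]]   # strategy 3

    def conjunctive(u, thresholds):
        """Keep strategies whose every attribute meets its threshold."""
        return [i for i, row in enumerate(u)
                if all(x >= t for x, t in zip(row, thresholds))]

    def disjunctive(u, thresholds):
        """Keep strategies where at least one attribute meets its threshold."""
        return [i for i, row in enumerate(u)
                if any(x >= t for x, t in zip(row, thresholds))]

    def addition(u):
        """Choose the strategy with the greatest summed utility."""
        return max(range(len(u)), key=lambda i: sum(u[i]))

    print(conjunctive(u, [0.3, 0.4, 0.4]))  # [1]: only strategy 2 passes
    print(disjunctive(u, [0.8, 0.8, 0.8]))  # [0, 2]
    print(addition(u))                      # 0: strategy 1, sum 1.8

The dominance rule can be checked in a similar pairwise manner, and the
lexicographic rule reduces to picking the best value in the most important
column.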
4 Multi-Attribute Decision-Making
Traditionally, expected utility maximization for decision-making has been
described in the context of a probability distribution over the set of possible future
worlds, in combination with a set of strategies [Berger 1985; Jaynes 1996; Raiffa
1968]. This is also the approach we have used in earlier work [Arnborg
et al. 2000; Brynielsson and Granlund 2001]. In this paper we describe
utility maximization theory from a more practical perspective, meaning that
we introduce the attribute dimension. The combination of attributes and
probability distributions is discussed in [Raiffa 1968]. Here we focus
solely on the attribute dimension, in order to address problems and solutions for
utility maximization theory in the context of decision-making within Command
and Control.
In order to make a decision, it does not suffice to estimate probabilities and
assess possible consequences related to certain courses of action. Each
possible consequence must also be described with a set of attributes, e.g., cost, age,
size, and so forth. These attributes should reflect an absolute value, such as the
monetary cost, the number of units that are left, etc.
Definition 1. A utility model in the attribute domain consists of

- a set S = {s_1, ..., s_m} of strategies (i.e., the possible courses of action),

- a set A = {a_1, ..., a_n} of attributes,

- a set C of consequences, defined by the consequence function h: S × A → C
  that associates with each pair (s_i, a_j) a consequence c_{i,j}, and

- a set U = {U_1, ..., U_n} of utility functions U_j: C → R, which defines
  the decision-maker's preferences so that U_j(c_{i,j}) ≥ U_l(c_{k,l}) if and only
  if the decision-maker prefers the consequence c_{i,j} to the consequence c_{k,l}.

Note that a utility function is defined for each attribute a_j, to reflect our
belief that each attribute behaves in a different way.
We denote by U_j(h(s_i, a_j)) = U_j(c_{i,j}) = u_{i,j} the utility value obtained by
performing strategy s_i with respect to attribute a_j. We can now look at the
corresponding consequence and utility matrices:

    C:       a_1      a_2      ...   a_j      ...   a_n
       s_1   c_{1,1}  c_{1,2}  ...
       s_2   c_{2,1}  c_{2,2}  ...
       ...                     ...
       s_i                           c_{i,j}
       ...                                    ...
       s_m                                          c_{m,n}

    U:       a_1      a_2      ...   a_j      ...   a_n
       s_1   u_{1,1}  u_{1,2}  ...
       s_2   u_{2,1}  u_{2,2}  ...
       ...                     ...
       s_i                           u_{i,j}
       ...                                    ...
       s_m                                          u_{m,n}
The consequence matrix should be seen as an important component in a Decision
Support System for Command and Control, as it contains absolute values
regarding the decision situation at hand. These absolute values reflect actual
measures of an attribute. In a business context, attribute 1 might be a cash
amount, attribute 2 a share of the market, attribute 3 an index of goodwill,
and so forth. In a medical context, attribute 1 might be the cost of treatment,
attribute 2 the number of days of extreme discomfort, attribute 3 the number of
days of recuperation with bed rest, attribute 4 the probability of a relapse after
the cutoff date of the analysis, and so forth. (Both examples are taken from
[Raiffa 1968].) In the context of a Decision Support System for Command and
Control, we primarily believe that the consequence values c_{i,j} are obtained
automatically through modeling and simulation, but they may also be subjective
predictions.

The utility matrix, on the other hand, contains utility values u_{i,j} that are of
interest solely because they can be compared with each other. They do not
reflect an actual measure of any kind. Also, when the decision situation changes,
the utility function U_j changes. The utility matrix is therefore likely to change
even though the consequence matrix has not. The conclusion is that the
utility values are of theoretical interest only, whilst the utility functions are
of great interest for the decision-maker.
5 Preference and Utility Functions for Multi-Attribute Decision-Making
In order to rank strategies relative to each other we need a preference function,
i.e., a function used to determine which strategies we prefer over others. In a
decision situation, the strategy that yields the greatest preference value is
chosen.
In this paper we assume the attributes to be independent, and define our
preference function P as a sum of the utility values of the attributes:
Definition 2. Let s_i ∈ S and n = |A|, where S and A are the sets of strategies
and attributes in a utility model in the attribute domain as in Definition 1. The
preference function P: S → R is then defined by

    P(s_i) = Σ_{j=1}^{n} U_j(c_{i,j})

where P(s_i) ≥ P(s_k) if and only if the decision-maker prefers strategy s_i to
strategy s_k, given the utility functions U_1, ..., U_n.

We denote by P(s_i) the preference value obtained by performing strategy s_i.
Note that the preference function is defined for each strategy, so that index
i is fixed whilst we sum over all attributes using index j.
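As a minimal illustration of Definitions 1 and 2 (a sketch with invented
consequences and utility functions), each attribute has its own utility function
and the preference function sums across the attributes:

    # Sketch of Definitions 1 and 2 with invented data: two strategies and
    # two attributes (cost and time), one utility function per attribute.
    consequences = {            # the consequence matrix, c_{i,j}
        "s1": [100.0, 30.0],    # cost [dollars], time [minutes]
        "s2": [80.0, 55.0],
    }
    utility_fns = [             # U_j: C -> R, one per attribute
        lambda cost: -0.01 * cost,   # cheaper is better
        lambda time: -0.1 * time,    # faster is better
    ]

    def P(s):
        """Definition 2: P(s_i) = sum over j of U_j(c_{i,j})."""
        return sum(U(c) for U, c in zip(utility_fns, consequences[s]))

    best = max(consequences, key=P)     # the preferred strategy
    print({s: round(P(s), 2) for s in consequences}, best)
    # {'s1': -4.0, 's2': -6.3} s1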
Having stated Definitions 1 and 2, we have reformulated our decision
problem into the problem of defining the utility functions U_j: C → R. We now
discuss how this can be accomplished for various decision situations. Our goal
is to satisfy the criteria in Section 3 to the highest extent possible. Unfortunately,
all the criteria cannot be met at the same time, and hence we only sketch some
possible solutions.

The dominance rule will hold no matter how we design our utility functions.
This follows immediately from the definition of the rule itself, together with
Definitions 1 and 2.
The conjunctive rule says that the consequence in each attribute must not be
below a certain threshold value that is specific for each attribute. To
accomplish this, we represent the utility function as a step function reflecting
that a certain threshold value needs to be reached; if this value is not reached,
the utility should be significantly reduced. U_j(c_{i,j}) then looks something
like this:

    U_j(c_{i,j}) = { f_j(c_{i,j})   if better than the threshold,
                   { K              otherwise
where K is a large negative constant that overshadows all other possible
outcomes. The constant K should be chosen so that one is able to distinguish
between strategies that have an attribute below the threshold (i.e., the attribute
indicates a non-satisfied requirement of some kind) and strategies that are, in
some sense, all right with respect to all of their attributes. This can be
accomplished by selecting K in the following way:

    K = -( 2 max_i | Σ_{j=1}^{n} f_j(c_{i,j}) | + ε )

where ε is a small positive constant. Defining K in this way also makes it
possible to distinguish situations where none of the alternatives has a
sequence of consequences that reaches the attribute-specific threshold values.
The disjunctive rule states that the consequence in at least one attribute of
the chosen strategy should exceed a certain threshold value that is specific for
this attribute. We now come to a situation where we have to choose: the
disjunctive rule and the conjunctive rule cannot both be satisfied at the same
time. We prefer to look at this problem from our research perspective, the
construction of Decision Support Systems for Command and Control. In this
context it is of great importance to discover situations that fail to meet a certain
criterion, in order to avoid them. Hence, the conjunctive rule is of utmost
importance. We consider the disjunctive rule important, but only secondary
to the conjunctive rule. We therefore notice that it is enough to expand our
model with a big constant that gives preference to a strategy with an attribute
above the threshold, as long as none of its other attributes violates the
conjunctive rule. This means that f_j(c_{i,j}) should look something like this:
    f_j(c_{i,j}) = { k              if better than the threshold,
                   { g_j(c_{i,j})   otherwise

where

    k > max_i ( Σ_{j=1}^{n} g_j(c_{i,j}) ).
The lexicographic rule requires that the attributes be ranked with respect to
how important they are. Making a decision means choosing the strategy that
yields the best value in the most important attribute. In our context, applying
weights and thresholds that give priority to one or more attributes fulfills this
decision rule. However, we do not consider this a common situation in
Command and Control decision-making.

Choice of the alternative with the most attractive value of a single attribute
says that one chooses the strategy that has the highest utility value in a single
attribute over all other attributes and strategies. Referring to the solution we
applied to satisfy the disjunctive rule, we notice that this rule is already partly
satisfied. There will be problems distinguishing between strategies that all
satisfy the upper threshold at the same time, but we do not consider this a
problem in our context.
The addition rule means that all the attribute utility values for a certain strategy
are added together in order to choose the strategy that yields the greatest sum.
Definition 2, where the utilities are summed together, implicitly supports this.
The addition rule also requires that attributes can be compared with each other,
which follows from Definition 1, where U_j(c_{i,j}) ≥ U_l(c_{k,l}) if and only if the
decision-maker prefers the consequence c_{i,j} to the consequence c_{k,l}.

We can now finish up and define our utility function.
Definition 3. Let c_{i,j} ∈ C, where C is the set of consequences in a utility model
in the attribute domain as in Definition 1. The utility function U_j: C → R
is then defined by

    U_j(c_{i,j}) = { k              if better than the disjunctive threshold,
                   { K              if worse than the conjunctive threshold,
                   { g_j(c_{i,j})   otherwise

where k > max_i ( Σ_{j=1}^{n} g_j(c_{i,j}) ) and K = -(2|k| + ε) for any positive ε.


Although given as a definition, we should remember that the shape of a utility
function varies with time and preferences. The above scheme is only
a suggestion, and there is still interesting work to be done on the function g_j,
probably by performing trials with real commanders.
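The following sketch illustrates Definition 3 for a single invented, cost-like
attribute where lower consequence values are better; k and K are derived
according to the formulas above, with K read as a large negative penalty:

    # Sketch of Definition 3 with invented thresholds and a linear g_j.
    EPS = 0.01

    def make_utility(g, conj_threshold, disj_threshold, k, K, better):
        """Build U_j per Definition 3; better(c, t) tells whether the
        consequence c is on the good side of threshold t."""
        def U(c):
            if disj_threshold is not None and better(c, disj_threshold):
                return k      # bonus: clearly better than required
            if conj_threshold is not None and not better(c, conj_threshold):
                return K      # penalty: the requirement is violated
            return g(c)
        return U

    g = lambda c: -1.0 * c                    # invented linear g_j
    rows = [[12.0], [7.0], [3.0]]             # single-attribute consequences
    k = max(abs(sum(map(g, row))) for row in rows) + 1.0  # k > max_i sum_j g_j
    K = -(2.0 * abs(k) + EPS)                 # K = -(2|k| + eps)

    U = make_utility(g, conj_threshold=10.0, disj_threshold=4.0,
                     k=k, K=K, better=lambda c, t: c <= t)
    print([U(row[0]) for row in rows])        # [-26.01, -7.0, 13.0]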
To summarize, we note that the proposed utility function:

- is based on traditional Decision Theory,

- fulfills decision rules from Cognitive Psychology, and therefore overcomes
  difficulties encountered in traditional Decision Theory,

- can be applied to various decision problems by altering the function g_j, and

- makes it possible for the commander to compare attributes.
We think that the number of ways in which one would want to represent g_j is
limited. Some minimum requirements are:

- g_j should scale the consequences c_{i,j} to make the values comparable
  with each other,

- g_j should determine whether the consequences define ascending or
  descending preferences, and

- g_j should give the commander the opportunity to weight attributes
  against each other in a linear fashion.
In the later sections of this paper we put constraints on g_j so that the function
fulfills these requirements:

Definition 4. Let the Strategy Evolution Toolbox utility function be a
restriction of Definition 3, with the following constraint on g_j:

    g_j(c_{i,j}) = v_j · w_j · c_{i,j}

where v_j ∈ R is the significance factor and w_j ∈ R is the scale factor.
[Figure 2 shows a mock-up of the Strategy Evolution Toolbox user interface: a
table with strategies as rows and attributes as columns, per-attribute
Requirement, Significance, and Scale settings, Sort buttons for every column,
View/Edit buttons for every strategy, and New, Predict, and Evolve buttons.
The example data, sorted by preference value:

                  Civilian       Own            Enemy         Time on      Preference
                  Casualties     Casualties     Casualties    Goal         Value
                  [persons]      [persons]      [persons]     [minutes]
    Requirement   5              10             0             60
    Significance  Extreme (x10)  High (x3)      Low (x1/3)    High (x3)
    Scale         -1             -1             1             -0.2
    Strategy 4    0              4              22            50           -34.74
    Strategy 3    0              6              3             55           -50.01
    Strategy 1    12             7              18            45           -369.06
    Strategy 2    3              30             57            73           -665.19]

Figure 2: The suggested user interface of the Strategy Evolution Toolbox, with
an example of four strategies evaluated according to four attributes.
The significance factor gives the commander the ability to express which
attributes are important in each particular decision situation. The scale
factor is used to scale attributes into normalized values.
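As a trivial sketch of Definition 4, the sign of the scale factor w_j encodes
ascending versus descending preferences, while the significance factor v_j
weights the attribute; the example values below are taken from Figure 2:

    # Definition 4: g_j(c) = v_j * w_j * c. A negative w_j encodes
    # "less is better" (e.g., casualties); a positive w_j "more is better".
    def make_g(v, w):
        return lambda c: v * w * c

    g_own = make_g(v=3.0, w=-1.0)    # own casualties: high significance, cost
    g_enemy = make_g(v=1/3, w=1.0)   # enemy casualties: low significance, benefit
    print(g_own(4), g_enemy(22))     # -12.0 7.33...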
6 The Strategy Evolution Toolbox
Figure 2 suggests a user interface for the Strategy Evolution Toolbox. We will
now describe the usage of this toolbox in the context of the previously described
decision-making process (depicted in Figure 1).

The number of attributes to be considered in the evaluation of strategies
may be changed according to the task at hand. For each attribute, the
preferences of the decision-maker can be stated. Three parameters must be given as
input to the system to fully define the utility function according to Definition 4.
First, there is the scale factor w_j, which gives the attributes comparable sizes
and signs. Second, there is the significance factor v_j, which indicates how
important the attribute is compared to the other attributes. To facilitate input,
there are five predetermined values of this parameter: the significance factor can
be set to extreme (v_j = 10), high (v_j = 3), medium (v_j = 1), low (v_j = 1/3),
or very low (v_j = 1/10). Third, there is the conjunctive threshold, which defines
the requirement a strategy must meet to even be considered a valid option,
according to the conjunctive decision rule.
The example in Figure 2 suggests a mission to fight an adversary in order to
achieve some goal. The adversary's casualties are, according to the preferences
in Figure 2, considered a desirable quantity, hence the positive sign of the
scale factor. The time on goal is an undesirable quantity, hence the corresponding
negative scale factor. This attribute is also scaled down by one fifth to make
the units (minutes and persons) of the different attributes comparable.
[Figure 3 shows the View/Edit Strategy dialogue for a strategy named
Strategy 4, with six personality weights: move towards alive friendly agents 30%,
towards alive enemy agents -10%, towards injured friendly agents 40%, towards
injured enemy agents 10%, towards own flag 10%, and towards enemy flag 0%,
together with OK and Cancel buttons.]

Figure 3: The View/Edit Strategy dialogue, with a defensive personality from
the ISAAC manual as an example.
Furthermore, the preferences state that it is not acceptable to lose more than 5
casualties among the nearby civilians, and that it is of extreme significance that
there be no additional civilian casualties. They also state that no more than
10 casualties among own forces are acceptable, and that it is of high
significance not to suffer any further losses. Although a desirable quantity, the
adversary's casualties are of low significance, and hence there is no requirement
on this attribute. The last attribute, the time consumed to reach the goal,
is considered to be of high significance when different strategies are compared.
The requirement on this attribute states that the time to reach the goal must
not exceed 60 minutes.
New strategies to be evaluated may be introduced and edited manually via the
View/Edit Strategy dialogue box in Figure 3. A strategy is one of all possible
solutions that more or less satisfy the decision-making problem that the user
has defined by entering his preferences. The nature of strategies has not been
thoroughly investigated in the present study. For now, we use a simplification
of the ISAAC representation of personalities [Ilachinski 1997] as an example of
strategy representation. In the ISAAC system, the decision-making problem
may be described (admittedly with a strong simplification) as the problem of
analyzing how to use own forces in defensive or offensive actions, respectively,
considering the probable outcome of the battle. The example strategy in Figure
3 shows, for instance, what the ISAAC manual [Ilachinski 1997] refers to as a
"defensive" personality, as compared to other propensities. We believe, however,
that future research will show that strategies may be constituted in very
different manners to solve other kinds of decision problems.
By pushing the Predict button in the user interface of Figure 2, the agent-based
simulation of the battle is started and the consequences in the different
attributes are measured. Several simulations may be run, each time with
different presumptions regarding, for instance, the starting points of the agents
(as in the ISAAC system). The mean values of the measured consequences will
be displayed in the corresponding fields of the user interface. Also, for each
strategy s_i, the preference value P(s_i) is calculated and displayed to the user.
To facilitate the user's interpretation of the results, consequence values that are
unsatisfactory according to the requirements are displayed in boldface in
Figure 2, together with their corresponding strategy labels. Further functions
to facilitate understanding include the possibility to sort the results by any of
the columns; the strategies in Figure 2 have, for instance, been sorted by their
preference values. In the example, Strategy 4 and Strategy 3 both satisfy the
requirements, although the total utility value of Strategy 4 is slightly better.
The two remaining strategies do not fully satisfy the requirements and are hence
less preferable. Of these, Strategy 1 is preferred over Strategy 2, since it shows
only one unsatisfactory value while the latter shows two consequence values
that do not satisfy the requirements.
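In fact, the displayed preference values can be reproduced from Definitions 2-4.
The sketch below does so; note that the attribute order, the penalty constant
K = -327, and the value 0.33 (rather than exactly 1/3) for low significance are
inferred from the displayed numbers rather than stated explicitly in the figure:

    # Reproducing the Figure 2 preference values (K and details inferred).
    K = -327.0  # conjunctive penalty; overshadows every attainable g_j sum

    ATTRIBUTES = [  # (name, requirement (None = no requirement), v_j, w_j)
        ("civilian casualties [persons]", 5,    10.0, -1.0),
        ("own casualties [persons]",      10,    3.0, -1.0),
        ("enemy casualties [persons]",    None,  0.33, 1.0),
        ("time on goal [minutes]",        60,    3.0, -0.2),
    ]

    def utility(c, req, v, w):
        """Definition 3 restricted by Definition 4: the penalty K below the
        conjunctive threshold, otherwise g_j(c) = v_j * w_j * c."""
        if req is not None and c > req:    # requirement violated
            return K
        return v * w * c

    def preference(consequences):
        """Definition 2: sum the per-attribute utilities."""
        return sum(utility(c, req, v, w)
                   for c, (name, req, v, w) in zip(consequences, ATTRIBUTES))

    STRATEGIES = {
        "Strategy 1": [12, 7, 18, 45],
        "Strategy 2": [3, 30, 57, 73],
        "Strategy 3": [0, 6, 3, 55],
        "Strategy 4": [0, 4, 22, 50],
    }
    for name, cs in sorted(STRATEGIES.items(),
                           key=lambda kv: -preference(kv[1])):
        print(name, round(preference(cs), 2))
    # Strategy 4 -34.74, Strategy 3 -50.01,
    # Strategy 1 -369.06, Strategy 2 -665.19, as in Figure 2.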
The user could now conclude that, since two strategies satisfy the requirements,
the decision problem is reduced to selecting one of them for execution. Another
conclusion could be that one might do even better if one of the strategies were
improved in the light of the new understanding gained through the evaluation
process. Yet another possible conclusion is that the preferences, rather than the
strategies, ought to be changed; the user could, for instance, find that the
significance of the different attributes should be balanced in another manner, or
that the requirements are too tight or too generous.
The manual evolution of strategies described above consists of selecting the
fittest strategies according to the preference function P: S → R, and improving
those strategies by hand. Such evolution could also be performed automatically
by the use of genetic algorithms. By letting the new strategies randomly inherit
properties from the fittest strategies of the previous generation, the utility
values improve successively with each generation. Genetic algorithms provide a
method to search for hypotheses (in our case, the strategies) that optimize a
fitness function (in our case, the preference function P: S → R). Their
performance is well understood and there exist many successful applications of
this technique [Mitchell 1997]. The ISAAC system includes such functionality,
showing promising possibilities to automatically evolve agent personalities.
Thus we conclude that implementing genetic algorithms to automatically evolve
strategies is a straightforward task (see the sketch following this list), as long
as the following requirements are satisfied:

- Strategies can be specified in a formalized manner.

- There are simulation models to measure the performance of the strategies
  according to the given task.

- These models are not time-consuming to execute.

- There is a preference function that is able to rank the results, i.e., calculate
  a fitness value for every potential strategy.
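As an illustration of such an evolutionary loop (a sketch only, not the ISAAC
implementation), the following reuses preference() from the previous sketch as
the fitness function. The six weights mimic the ISAAC-style personality of
Figure 3, and simulate() is an invented stand-in for the agent-based simulation:

    import random

    random.seed(1)

    def simulate(strategy):
        """Invented stand-in for the agent-based simulation: maps six
        personality weights to the four Figure 2 attributes. The real
        toolbox would average repeated battle simulations instead."""
        towards_friends = strategy[0] + strategy[2]   # alive + injured friends
        towards_enemy = strategy[1] + strategy[3]     # alive + injured enemies
        towards_flags = strategy[4] + strategy[5]
        civilian = max(0.0, towards_enemy / 40)
        own = max(0.0, 8 - towards_friends / 10)
        enemy = max(0.0, towards_enemy / 4)
        time = max(1.0, 70 - towards_flags / 2)
        return [civilian, own, enemy, time]

    def fitness(strategy):
        return preference(simulate(strategy))         # P from Definition 2

    def crossover(a, b):
        """Each weight is inherited at random from one of the two parents."""
        return [random.choice(pair) for pair in zip(a, b)]

    def mutate(s, rate=0.2):
        return [w + random.gauss(0, 10) if random.random() < rate else w
                for w in s]

    POP, GENERATIONS = 20, 40
    population = [[random.uniform(-100, 100) for _ in range(6)]
                  for _ in range(POP)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)    # rank by preference
        parents = population[:POP // 2]               # keep the fittest half
        population = parents + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(POP - len(parents))]

    best = max(population, key=fitness)
    print([round(w) for w in best], round(fitness(best), 2))

The selection-and-variation scheme is deliberately minimal; any standard
genetic algorithm [Mitchell 1997] could be substituted.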
Of these requirements, we find the specification of accurate simulation
models to be the most critical. The use of inadequate models will result in
strategies that are optimized for something other than the user's preferences.
7 Conclusions
We have specified a tool that will support the evolutionary development of
strategies that solve tasks in the Command and Control domain. The
implementation of this tool will be straightforward, except for the principles of how to
formally represent strategies and how to predict their behavior by the use of
simulation models. These problems need more attention.

The evolutionary method supported is based on well-accepted decision-making
theories. Development of a prototype will provide the possibility to investigate
further how the method and the tool may be improved. In particular, it will be
interesting to evaluate the use of different utility functions in practice. Further
research also includes how to take uncertainty into account when estimating the
consequences of a strategy and providing utility values.
Acknowledgments
We gratefully acknowledge Stefan Arnborg (Royal Institute of Technology),
Berndt Brehmer (Swedish National Defence College), Qi Huang (SaabTech AB),
Lars Eriksson (Ledab), and Erik Lindberg (Swedish Defence Research Agency)
for their invaluable comments on this work.
References
Stefan Arnborg, Henrik Artman, Joel Brynielsson, and Klas Wallenius.
Information Awareness in Command and Control: Precision, Quality, Utility. In
Proceedings Third International Conference on Information Fusion (FUSION
2000), pages ThB1/25-32, Paris, July 2000.
James O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer-
Verlag, New York, second edition, 1985.
Joel Brynielsson and Rego Granlund. Assistance in Decision Making: Decision
Help and Decision Analysis. In Proceedings Sixth International Command and
Control Research and Technology Symposium, Annapolis, MD, June 2001.
Thomas P. Coakley. Command and Control for War and Peace. National
Defense University Press, Washington, DC, 1991.
Colleen M. Gagnon and William K. Stevens. Use of Modeling and Simulation
(M&S) in Support of Joint Command and Control Experimentation: Naval
Simulation System (NSS) Support to Fleet Battle Experiments. In Proceedings
1999 Command and Control Research and Technology Symposium, June 1999.
Gary E. Horne. Maneuver Warfare Distillations: Essence not Verisimilitude. In
P. A. Farrington, H. B. Nembhard, D. T. Sturrock, and G. W. Evans, editors,
Proceedings of the 1999 Winter Simulation Conference, 1999.
Andrew Ilachinski. Irreducible Semi-Autonomous Adaptive Combat (ISAAC):
An Artificial-Life Approach to Land Warfare. CRM 97-61.10. Center for Naval
Analyses, VA, August 1997.
Andrew Ilachinski. Irreducible Semi-Autonomous Adaptive Combat (ISAAC).
In F. G. Hoffman and G. Horne, editors, Maneuver Warfare Science 1998, pages
73-83. Marine Corps Combat Development Command, Quantico, VA, 1998.
Edwin T. Jaynes. Probability Theory: The Logic of Science. Preprint,
Washington University, 1996.
Gary A. Klein. Strategies of Decision Making. Military Review, May 1989.
Paul R. Kleindorfer, Howard C. Kunreuther, and Paul J. H. Schoemaker.
Decision Sciences: An Integrative Perspective. Cambridge University Press, 1993.
Tom M. Mitchell. Machine Learning. McGraw-Hill, 1997.
Henry Montgomery. Decision-Making (in Swedish). In L.-G. Lundh,
H. Montgomery, and Y. Wærn, editors, Cognitive Psychology, chapter 6, pages
171-188. Studentlitteratur, 1992.
Roger B. Myerson. Game Theory: Analysis of Conflict. Harvard University
Press, 1991.
Martin J. Osborne and Ariel Rubinstein. A Course in Game Theory. MIT
Press, Cambridge, MA, 1994.
Howard Raiffa. Decision Analysis: Introductory Lectures on Choices Under
Uncertainty. Addison-Wesley, Reading, MA, 1968.