
Decision Support Systems 15 (1995) 251-266

Invited Paper
Design and natural science research on information technology
Salvatore T. March *, Gerald F. Smith
Information and Decision Sciences Department, Carlson School of Management, University of Minnesota, 271 19th Avenue South,
Minneapolis, MN 55455, USA
Abstract
Research in IT must address the design tasks faced by practitioners. Real problems must be properly conceptual-
ized and represented, appropriate techniques for their solution must be constructed, and solutions must be
implemented and evaluated using appropriate criteria. If significant progress is to be made, IT research must also
develop an understanding of how and why IT systems work or do not work. Such an understanding must tie together
natural laws governing IT systems with natural laws governing the environments in which they operate. This paper
presents a two dimensional framework for research in information technology. The first dimension is based on broad
types of design and natural science research activities: build, evaluate, theorize, and justify. The second dimension is
based on broad types of outputs produced by design research: representational constructs, models, methods, and
instantiations. We argue that both design science and natural science activities are needed to insure that IT research
is both relevant and effective.
Keywords: Information system research; Design science; Natural science; Information technology
1. Introduction
Researchers in Information Technology (IT)
have defined information as "data that has been
processed into a form that is meaningful to the
recipient and is of real or perceived value in
current or prospective actions or decisions" [14,
p. 200]. This definition can be grounded in cogni-
tivist theories of mental representation [67]. Hu-
man thinking involves mental representations that
intendedly correspond to reality. These represen-
tations are commonly called beliefs or, when
highly validated, knowledge. They are produced
¹ This paper is an extension of ideas originally presented in [46].
* Corresponding author.
when people pick up sensory inputs or stimuli
from their environment. As new information is
acquired, one's beliefs are adjusted to better
match the perceived reality.
Human knowledge and beliefs inform actions
taken in pursuit of goals. Well-informed actions
(i.e., those based on true beliefs) are more likely
to achieve desired ends. Information is valuable
insofar as it helps individuals form true beliefs
which, in turn, promote effective, goal-achieving
action.
Technology has been defined as "practical im-
plementations of intelligence" [20, p. 26]. Tech-
nology is practical or useful, rather than being an
end in itself. It is embodied, as in implements or
artifacts, rather than being solely conceptual. It is
an expression of intelligence, not a product of
blind accident. Technology includes the many
tools, techniques, materials, and sources of power
that humans have developed to achieve their
goals. Technologies are often developed in re-
sponse to specific task requirements using practi-
cal reasoning and experiential knowledge.
Information technology is technology used to
acquire and process information in support of
human purposes. It is typically instantiated as IT
systems - complex organizations of hardware,
software, procedures, data, and people, devel-
oped to address tasks faced by individuals and
groups, typically within some organizational set-
ting.
IT is pervasive throughout the industrialized
world. Business and government organizations
annually spend billions of dollars to develop and
maintain such systems. IT affects the work we do
and how that work is done [62]. Innovative uses
of information technologies have led to signifi-
cant improvements for some companies (such as
American Hospital Supply, Federal Express, Mrs
Fields, Frito Lay, etc.) and have defined the
competitive marketplace for others (e.g., the air-
line industry).
IT practice is concerned with the development,
implementation, operation, and maintenance of
IT systems. Development and maintenance are
largely design activities. Systems analysts, pro-
grammers, and other professionals construct arti-
facts that apply information technology to organi-
zational tasks. Applications can be as mundane as
keeping track of customer and vendor accounts
and as sophisticated as decision making systems
exhibiting human-like intelligence. Implementa-
tion and operation are processes that utilize de-
signed methods, techniques, and procedures. Not
all attempts to exploit information technologies
have had such positive results [44]. Even mun-
dane applications of information technology can
be overly expensive or have adverse effects on the
organization [26].
IT has attracted scientific attention, in part
because of its potential for dramatically impact-
ing organizational effectiveness, both positively
and negatively. Scientists have also been drawn
by the pervasiveness of IT phenomena in our
information-based society [18,19]. Scientific inter-
est in IT reflects assumptions that these phenom-
ena can be explained by scientific theories and
that scientific research can improve IT practice.
Note, however, that there are two kinds of scien-
tific interest in IT, descriptive and prescriptive.
Descriptive research aims at understanding the
nature of IT. It is a knowledge-producing activity
corresponding to natural science [27]. Prescriptive
research aims at improving IT performance. It is
a knowledge-using activity corresponding to de-
sign science [65].
Though not intrinsically harmful, this division
of interests has created a dichotomy among IT
researchers and disagreement over what consti-
tutes legitimate scientific research in the field.
Such disagreements are common in fields that
encompass both knowledge-producing and knowl-
edge-using activities. They are fostered in part by
the prestige attached to science in modern soci-
eties and the belief that the term "science" should
be reserved for research that produces theoretical
knowledge. The debate in IT research is similar
to that between engineering and the physical
sciences. Knowledge-producing, "pure" science
normally has the upper hand in such debates. In
IT, however, the situation is different. It could be
argued that research aimed at developing IT sys-
tems, at improving IT practice, has been more
successful and important than traditional scien-
tific attempts to understand it. With the issue
undecided, the field is left in an uneasy standoff.
This article proposes a framework for IT re-
search that reconciles these conflicting points of
view. This framework also suggests a research
agenda for the scientific study of IT. The next
section contains theoretical background material
needed to understand the dual nature of IT re-
search. The framework itself is presented in the
following section and is applied to IT literature,
with examples drawn primarily from the domain
of data management. The final section offers
prescriptions for future IT research.
2. Theoretical background
IT research studies artificial as opposed to
natural phenomena. It deals with human cre-
ations such as organizations and information systems. This has significant implications for IT research which will be discussed later. Of immediate interest is the fact that artificial phenomena can be both created and studied, and that scientists can contribute to each of these activities. This underlies the dual nature of IT research. Rather than being in conflict, however, both activities can be encompassed under a broad notion of science that includes two distinct species, termed natural and design science [65]. Natural science is concerned with explaining how and why things are. Design science is concerned with "devising artifacts to attain goals" [65, p. 133].
Natural science includes traditional research in physical, biological, social, and behavioral domains. Such research is aimed at understanding reality. Natural scientists develop sets of concepts, or specialized language, with which to characterize phenomena. These are used in higher order constructions - laws, models, and theories - that make claims about the nature of reality. Theories - deep, principled explanations of phenomena [1] - are the crowning achievements of natural science research. Products of natural science research are evaluated against norms of truth, or explanatory power. Claims must be consistent with observed facts, the ability to predict future observations being a mark of explanatory success. Progress is achieved as new theories provide deeper, more encompassing, and more accurate explanations.

Natural science is often viewed as consisting of two activities, discovery and justification [35]. Discovery is the process of generating or proposing scientific claims (e.g., theories, laws). Justification includes activities by which such claims are tested for validity. The discovery process is not well understood. Though some have argued that there is a "logic" of scientific discovery [7], mainstream philosophy of science has historically regarded discovery as a creative process that psychologists may or may not be able to understand [4]. Promising insights into this phenomenon have recently been offered by AI research [41].

Justification, on the other hand, has been heavily prescribed for by philosophers of science. An initial commitment to inductive logic, justifying claims by accumulating confirming instances, was overthrown by Popper's falsificationism [59]. Popper argued that scientists should try to disprove claims since a single negative instance could do so, while innumerable confirming instances could not prove a theory true.

Scientific study makes extensive use of the hypothetico-deductive method. That is, theories can be tested insofar as observational hypotheses can be deduced from them and compared to relevant empirical data [4]. Most scientific methodologies used by IT researchers are prescriptions for collecting and assessing data in this way [34].
Whereas natural science tries to understand reality, design science attempts to create things that serve human purposes. It is technology-oriented. Its products are assessed against criteria of value or utility - does it work? is it an improvement? Design is a key activity in fields like architecture, engineering, and urban planning [61] that may not be thought of as "sciences" per se. At the same time, design activities are an important part of traditional scientific fields, some scientists conducting both natural and design science investigations. Then too, fields like operations research and management science (OR/MS) are heavily prescriptive in intent, while claiming to be sciences. Rather than producing general theoretical knowledge, design scientists produce and apply knowledge of tasks or situations in order to create effective artifacts. If science is activity that produces "credentialed knowledge" [51, p. 311], then, following Simon [65], design science is an important part of it.

Design science products are of four types: constructs, models, methods, and implementations. As in natural science, there is a need for a basic language of concepts (i.e., constructs) with which to characterize phenomena. These can be combined in higher order constructions, often termed models, used to describe tasks, situations, or artifacts. Design scientists also develop methods, ways of performing goal-directed activities. Finally, the foregoing can be instantiated in specific products, physical implementations intended to perform certain tasks. Notably absent from this list are theories, the ultimate products of natural science research. Rather than posing theories, design scientists strive to create models, methods, and implementations that are innovative and valuable.
Design science consists of two basic activities, build and evaluate. These parallel the discovery-justification pair from natural science. Building is the process of constructing an artifact for a specific purpose; evaluation is the process of determining how well the artifact performs. Like the discovery process in natural science, the design science build process is not well understood.

Significant difficulties in design science result from the fact that artifact performance is related to the environment in which it operates. Incomplete understanding of that environment can result in inappropriately designed artifacts or artifacts that result in undesirable side-effects. A critical challenge in building an artifact is anticipating the potential side-effects of its use, and insuring that unwanted side-effects are avoided. Evaluation is complicated by the fact that performance is related to intended use, and the intended use of an artifact can cover a range of tasks. General problem solving methods, for example, are applicable to many different problems with performance varying considerably over the domain of application. Not only must an artifact be evaluated, but the evaluation criteria themselves must be determined for the artifact in a particular environment. Progress is achieved in design science when existing technologies are replaced by more effective ones.
To further clarify the natural science-design science distinction, it can be compared to others commonly made. Simon's [65] distinction between natural and artificial phenomena, discussed at the beginning of this section, should not be confused with the natural-design science pair. Design science produces artifacts and artificial phenomena. However, natural science can address both natural and artificial phenomena. Natural scientists, for example, try to understand the functioning of organizations, which are artificial phenomena. Likewise, chemists attempt to determine the properties of synthetic compounds, and biologists investigate the behaviours of man-made organisms. These are natural science activities even though they take artificial phenomena as their objects. Rather than being driven by research topic, the natural-design science distinction is based on different research objectives. Natural science aims at understanding and explaining phenomena; design science aims at developing ways to achieve human goals.

The distinction between basic and applied science is also relevant. This usually reflects how closely scientific research impinges on practice. While natural science tends to be basic research and design science tends to be applied, the two pairs of concepts are not strictly parallel. A natural science account of information systems failure could be more relevant to practice than the development of a new data modelling formalism. Yet the former is natural science research, whereas the latter is design science research. Again, research intent is critical.

More relevant is the description-prescription distinction frequently employed by decision scientists [5]. Natural science is descriptive and explanatory in intent. Design science offers prescriptions and creates artifacts that embody those prescriptions.
Having differentiated two species of scientific activity, it is important to appreciate their interactions. First, design science creates artifacts, giving rise to phenomena that can be the targets of natural science research. Group decision support systems, for example, foster user behaviours that are the subject of natural science investigations (see for example, [21]).

Second, because artifacts "have no dispensation to ignore or violate natural laws" [65, p. 6], their design can be aided by explicit understanding of natural phenomena. Thus natural scientists create knowledge which design scientists can exploit in their attempts to develop technology. However, frequently the natural laws governing an artifact and its environment are not well understood. Hence, the constructed artifact itself presents a challenge to explain how and why it works. Natural science explanations of how or why an artifact works may lag years behind the application of the artifact. In medicine, for example, the explanation of why a drug is effective in combating a disease may not be known until long after the drug is in common use.
A final interaction concerns the justification of natural science claims. Theories are intended to correspond with or present a true account of reality. However, reality cannot be directly apprehended; we only have perceptions and other representations of such. How then are we to determine if theoretical claims are true? This validation problem is overcome in part by the effectiveness of theories in practical applications [43]. Natural science theories receive confirmatory support from the facts that bridges do not collapse, medical treatments cure diseases, people journey to the moon, and nuclear bombs explode. Indeed, philosophical pragmatists deny the correspondence notion of truth, proposing that truth essentially is what works in practice [60]. Thus, design science provides substantive tests of the claims of natural science research.
3. A research framework in information technology

Prior research frameworks in IT have characterized specific research subjects, identifying sets of variables to be studied (see, e.g., [32,24,49]). Such frameworks facilitate the generation of specific research hypotheses by positing interactions among identified variables. While this provides innumerable research questions, it has several weaknesses. First, it fails to provide direction for choosing important interactions to study [74]; any and all interactions among identified variables are treated equally. Second, it fails to account for the large body of design science research being done in the field. Third, it fails to recognize that IT research is concerned with artificial phenomena operating for a purpose within an environment; the nature of the task to which the IT is applied is critical. Fourth, it fails to recognize the adaptive nature of artificial phenomena; the phenomena themselves are subject to change, even over the duration of the research study.

Weber recognized that IT research is the study of artifacts as they are adapted to their changing environments and to changes in their underlying components [74]. We further argue that an appropriate framework for IT research lies in the interaction of design and natural sciences. IT research should be concerned both with utility, as a design science, and with theory, as a natural science. The theories must explain how and why IT systems work within their operating environments.

Our proposed framework is driven by the distinction between research outputs and research activities (Fig. 1). The first dimension of the framework is based on design science research outputs or artifacts: constructs, models, methods, and instantiations. The second dimension is based on broad types of design science and natural science research activities: build, evaluate, theorize, and justify. IT research builds and evaluates constructs, models, methods, and instantiations. It also theorizes about these artifacts and attempts to justify these theories. Building and evaluating IT artifacts have design science intent. Theorizing and justifying have natural science intent.

Fig. 1. A research framework: research outputs (constructs, models, methods, instantiations) crossed with research activities (build, evaluate, theorize, justify).
3.1. Research outputs
Constructs or concepts form the vocabulary of
a domain. They constitute a conceptualization
used to describe problems within the domain and
to specify their solutions. They form the special-
ized language and shared knowledge of a disci-
pline or sub-discipline. Such constructs may be
highly formalized as in semantic data modelling
formalisms (having constructs such as entities,
attributes, relationships, identifiers, constraints
[31]), or informal as in cooperative work (con-
sensus, participation, satisfaction [39]). Kuhn's
notion of paradi gm is based on the existence of
an agreed upon set of constructs for a domain
[40].
Conceptualizations are extremely important in
both natural and design science. They define the
terms used when describing and thinking about
tasks. They can be extremely valuable to designers and researchers. Brooks [6], for example, discusses the transformation in thinking about the software development process when presented with the conceptualization of growing rather than building. When software is built, one expects to specify the entire plan in advance and then construct the software according to plan. When software is grown, it is developed incrementally. Functionality is added as it is needed. The software developer conceives of the product as being dynamic, constantly evolving rather than as a static entity that is "completed" at a point in time.
On the other hand, conceptualizations can blind researchers and practitioners to critical issues. In the database area, the relational data model [11] provided an extremely influential conceptualization with which to describe data. It conceives of data as flat tables and defines concepts such as functional dependency and normal forms with which to evaluate database structures. It removes the consideration of physical file structures and permits an analyst to be concerned with "well-formed" logical structures. Unfortunately, this formalism became so ingrained that researchers lost sight of its shortcomings for physical database design. Research activities focused on how to efficiently process flat tables (see, e.g., [52]), when it is clear that nested record structures and system pointers (disallowed in the relational model but available in prior database representations [45]) are necessary to achieve efficient operations in many applications.
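To make the contrast concrete, the sketch below (illustrative only; the tables, fields, and values are invented, not drawn from the cited work) shows the same order data held as flat relational tables linked by a foreign key and as a nested record of the kind the relational model disallows but earlier representations supported. Retrieving an order with its lines from the flat form requires a join-like lookup; the nested form follows a direct containment path.

```python
# Illustrative only: the same information as flat tables vs. a nested record.

# Flat, relational-style tables: rows are flat, links are foreign keys.
orders = [{"order_id": 1, "customer": "Acme"}]
order_lines = [
    {"order_id": 1, "item": "bolt", "qty": 100},
    {"order_id": 1, "item": "nut", "qty": 100},
]

def order_with_lines(order_id):
    """Reassemble one order by scanning the line table (a logical join)."""
    order = next(o for o in orders if o["order_id"] == order_id)
    lines = [{"item": l["item"], "qty": l["qty"]}
             for l in order_lines if l["order_id"] == order_id]
    return {**order, "lines": lines}

# Nested record: the lines are physically contained within the order,
# so no join is needed to retrieve the whole order.
nested_order = {
    "order_id": 1,
    "customer": "Acme",
    "lines": [
        {"item": "bolt", "qty": 100},
        {"item": "nut", "qty": 100},
    ],
}

print(order_with_lines(1) == nested_order)  # True: same content, different access paths
```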
A model is a set of propositions or statements expressing relationships among constructs. In design activities, models represent situations as problem and solution statements. In semantic data modelling, the term model has been inappropriately used to mean data modelling formalism. The Entity-Relationship Model [9], for example, is a set of constructs, a data modelling formalism. The representation of an information system's data requirements using the Entity-Relationship constructs is more appropriately termed a model. Such a model is a solution component to an information requirements determination task
and a problem definition component to an infor-
mation system design task.
A model can be viewed simply as a descrip-
tion, that is, as a representation of how things are. Natural scientists often use the term model as a synonym for theory, or propose models as weak or incipient theories, in that they propose that phenomena be understood in terms of certain concepts and relationships among them. In our framework, however, the concern of models is utility, not truth (the concern of theories is truth, as discussed below). A semantic data model,
for example, is valuable insofar as it is useful for
designing an information system. Certain inaccu-
racies and abstractions are inconsequential for
those purposes. In data models, for example, the
notion of entity is pragmatically defined as an
"arbitrary" grouping of instances [37,38]. Al-
though theoretical criteria have been proposed
for defining entities [57], the key concern is that
the entities chosen be useful in representing and
communicating information system requirements.
Although silent or inaccurate on the details, a
model may need to capture the structure of real-
ity in order to be a useful representation. Simula-
tion and mathematical models, for example, can
be of immense practical value, although lacking
in many details [66]. On the other hand, unless
the inaccuracies and abstractions inherent in
models are understood, their use can lead to
inappropriate actions.
Modelling database operations as "logical
block accesses" [70], for example, calculates the
expected number of data blocks required to sat-
isfy a database retrieval request, ignoring the
details of physical storage devices and the com-
plexities of the computer operating environment.
It is extremely useful for feasibility assessment
where the purpose is to obtain "order of magni-
tude" approximations of system performance.
However, it is inappropriate for physical database
design where the purpose is to tune the design
for efficient overall operations. In this case physi-
cal storage devices and the computer operating
environment must be more accurately repre-
sented or the model will not be able to distin-
guish the efficiency effects of various design deci-
sions [10].
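As a concrete illustration of this level of abstraction, the sketch below computes a standard expected-block-access estimate of the kind described (Yao's formula; whether this is exactly the model of [70] is not claimed here). It predicts how many of m blocks must be touched to fetch k records drawn at random from n records, while deliberately saying nothing about storage devices or the operating environment.

```python
from math import comb

def expected_block_accesses(n_records: int, n_blocks: int, k_selected: int) -> float:
    """Yao-style estimate: expected number of blocks touched when k of n uniformly
    distributed records are retrieved from n_blocks equal-size blocks.
    Physical device behaviour (seeks, caching, clustering) is deliberately ignored."""
    per_block = n_records // n_blocks  # assumes records divide evenly across blocks
    miss_prob = comb(n_records - per_block, k_selected) / comb(n_records, k_selected)
    return n_blocks * (1.0 - miss_prob)

# Order-of-magnitude feasibility estimate: 10,000 records in 500 blocks, fetch 50.
print(round(expected_block_accesses(10_000, 500, 50), 1))
```

Because the estimate abstracts from storage devices entirely, it suits the feasibility question; as the text notes, it is too coarse to discriminate among alternative physical designs.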
To further illustrate the utilitarian concern of
constructs and models, consider the research in
expert systems where knowledge is modeled as a
set of production rules or frames [13]. Although
proposed as a model of human expertise (i.e., a
cognitive science theory), from a design science
point of view, it is irrelevant if humans actually
represent knowledge in this way. The concern is
if this type of representation is useful for the
development of artifacts to serve human pur-
poses. While expert systems research has been
criticized for abandoning "basic research on in-
telligence" (Hofstadter, quoted by Weber [74]),
its design science impacts are irrefutable.
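To illustrate what such a representation looks like in an artifact (the rules and fact names below are invented, not taken from [13]): knowledge is held as condition-action rules, and a simple forward-chaining loop applies them, regardless of whether human experts actually reason this way.

```python
# Toy production-rule representation: knowledge as condition -> conclusion pairs.
# Whether experts really reason this way is a cognitive science question; for
# design science the only question is whether the representation is useful.

rules = [
    ({"fever", "rash"}, "suspect_measles"),
    ({"suspect_measles", "unvaccinated"}, "refer_to_physician"),
]

def forward_chain(facts, rules):
    """Fire any rule whose conditions are all present until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "rash", "unvaccinated"}, rules))
# {'fever', 'rash', 'unvaccinated', 'suspect_measles', 'refer_to_physician'}
```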
A method is a set of steps (an algorithm or
guideline) used to perform a task. Methods are
based on a set of underlying constructs (language)
and a representation (model) of the solution space
[54]. Although they may not be explicitly articu-
lated, representations of tasks and results are
intrinsic to methods. Methods can be tied to
particular models in that the steps take parts of
the model as input. Further, methods are often
used to translate from one model or representa-
tion to another in the course of solving a prob-
lem.
Data structures, for example, combine a repre-
sentation of computer memory with algorithms to
store and retrieve data. The problem statement
specifies the existing stored data and the data to
be stored or retrieved. The method (algorithm)
transforms this into a new specification of stored
data (storage) or returns the requested data (re-
trieval). Many algorithms use tree-structured con-
structs to model the problem and its solution.
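A minimal sketch of this pattern: a binary search tree pairs a model of stored data (linked nodes) with store and retrieve methods. The names and structure here are generic illustrations, not drawn from the article.

```python
# A binary search tree: a representation of stored data (linked nodes) plus
# methods that transform a storage or retrieval request into a result.

class Node:
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.left = self.right = None

def store(root, key, value):
    """Insert (key, value); return the (possibly new) root of the tree."""
    if root is None:
        return Node(key, value)
    if key < root.key:
        root.left = store(root.left, key, value)
    elif key > root.key:
        root.right = store(root.right, key, value)
    else:
        root.value = value  # key already present: overwrite its value
    return root

def retrieve(root, key):
    """Return the value stored under key, or None if absent."""
    while root is not None:
        if key == root.key:
            return root.value
        root = root.left if key < root.key else root.right
    return None

root = None
for k, v in [(42, "a"), (7, "b"), (99, "c")]:
    root = store(root, k, v)
print(retrieve(root, 7))   # "b"
print(retrieve(root, 13))  # None
```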
System development methods facilitate the
construction of a representation of user needs
(expressed, for example, as problems, decisions,
critical success factors, socio-technical and imple-
mentation factors, etc.). They further facilitate
the transformation of user needs into system re-
quirements (expressed in semantic data models,
behaviour models, process flow models, etc.) and
then into system specifications (expressed in
database schemas, software modules, etc.), and
finally into an implementation (expressed in phys-
ical data structures, programming language state-
ments, etc.). These are further transformed into
machine language instructions and bits stored on
disks.
The desire to utilize a certain type of method
can influence the constructs and models devel-
oped for a task. For example, desiring to use a
mathematical programming method, a researcher
may conceptualize a database design problem
using constructs such as decision variable, objec-
tive function, and constraint. Developing a model
of the problem using these constructs is itself a
design task requiring powerful methods and tech-
niques. Depending on the particulars of the
model, new mathematical programming solution
methods may need to be designed. The solution
to the model itself represents a design task for
implementation.
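A minimal sketch of such a conceptualization, using invented names and figures: a toy file-allocation version of database design cast as decision variables (the site chosen for each file), an objective function (total access cost), and a constraint (site capacity). Exhaustive enumeration stands in here for whatever mathematical programming method the particulars of a real model would demand.

```python
from itertools import product

# Invented toy instance: assign each file to exactly one site.
files = ["F1", "F2", "F3"]
sites = ["S1", "S2"]
access_cost = {  # hypothetical cost of serving file f from site s
    ("F1", "S1"): 4, ("F1", "S2"): 9,
    ("F2", "S1"): 7, ("F2", "S2"): 3,
    ("F3", "S1"): 6, ("F3", "S2"): 5,
}
capacity = {"S1": 2, "S2": 2}  # constraint: at most two files per site

def total_cost(assignment):
    """Objective function: total access cost of an assignment (file -> site)."""
    return sum(access_cost[(f, s)] for f, s in assignment.items())

def feasible(assignment):
    """Constraint check: no site holds more files than its capacity."""
    return all(list(assignment.values()).count(s) <= capacity[s] for s in sites)

# Decision variables: the site chosen for each file; solved here by enumeration.
best = min(
    (dict(zip(files, choice)) for choice in product(sites, repeat=len(files))
     if feasible(dict(zip(files, choice)))),
    key=total_cost,
)
print(best, total_cost(best))
```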
Natural science uses but does not produce
methods. Design science creates the methodological tools that natural scientists use. Research methodologies prescribe appropriate ways to gather and analyze evidence to support (or refute) a posited theory [34,42]. They are human-created artifacts that have value insofar as they address this task.

An instantiation is the realization of an artifact in its environment. IT research instantiates both specific information systems and tools that address various aspects of designing information systems. Instantiations operationalize constructs, models, and methods. However, an instantiation may actually precede the complete articulation of its underlying constructs, models, and methods. That is, an IT system may be instantiated out of necessity, using intuition and experience. Only as it is studied and used are we able to formalize the constructs, models, and methods on which it is based.

Instantiations demonstrate the feasibility and effectiveness of the models and methods they contain. Newell and Simon [53] emphasize the importance of instantiations in computer science, describing it as "an empirical discipline." They further state, "Each new program that is built is an experiment. It poses a question to nature, and its behaviour offers clues to the answer." Instantiations provide working artifacts, the study of which can lead to significant advancements in both design and natural science.

Group Decision Support System (GDSS) instantiations, for example, were developed in order to study the impacts of automated interventions on group processes. As early GDSSs were studied, constructs (e.g., roles, anonymity, Type I, II, and III GDSSs), models (e.g., situation models representing different problem types), and methods (e.g., idea generation, idea evaluation, automated facilitation) [16], leading to improved instantiations, were developed.

As a further example of the importance of instantiations, consider early research in operating systems. An operating system manages the resources of a computer system. It instantiates constructs such as processes, states, interrupts, and abstract machines and uses methods such as scheduling algorithms and memory management algorithms. The operationalization of such concepts within the UNIX operating system "led a generation of software designers to new ways of thinking about programming" [2, p. 757].
3.2. Research activities
Research activities in design science are twofold: build and evaluate. Build refers to the construction of the artifact, demonstrating that such an artifact can be constructed. Evaluate refers to the development of criteria and the assessment of artifact performance against those criteria.

Research activities in natural science are parallel: discover and justify. Discover, or more appropriately for IT research, theorize, refers to the construction of theories that explain how or why something happens. In the case of IT research this is primarily an explanation of how or why an artifact works within its environment. Justify refers to theory proving. It requires the gathering of scientific evidence that supports or refutes the theory.

We build an artifact to perform a specific task. The basic question is, does it work? Building an artifact demonstrates feasibility. These artifacts then become the object of study. We build constructs, models, methods, and instantiations. Each is a technology that, once built, must be evaluated scientifically.

We evaluate artifacts to determine if we have made any progress. The basic question is, how well does it work? Recall that progress is achieved when a technology is replaced by a more effective one. Evaluation requires the development of metrics and the measurement of artifacts according to those metrics. Metrics define what we are trying to accomplish. They are used to assess the performance of an artifact. Lack of metrics and failure to measure artifact performance according to established criteria result in an inability to effectively judge research efforts.

Hopcroft [30], for example, describes early research in algorithm development as being "very unsatisfying" due to a lack of coherent metrics by which to compare performance. A common approach to evaluating an algorithm was to compare the execution time of the algorithm with that of a competing algorithm for a specific task. The difficulty lay in the fact that the two algorithms were likely written in different languages and run on different computers, yielding little information from which to judge their relative performance. This motivated the development of "worst-case asymptotic performance" metrics. These provided objective measures of the relative performance of algorithms independent of their implementations.
However, as noted by Tarjan [68], metrics themselves must be scrutinized by experimental analysis. The algorithm with best "worst-case" performance may not be the best algorithm for a task. The simplex method, an algorithm commonly used to solve linear programming problems, for example, has relatively poor worst-case performance, growing "exponentially with the number of linear inequalities" [36, p. 108]. The ellipsoid method, on the other hand, is polynomially bounded, a considerable improvement. In practice, however, "the number of iterations is seldom greater than three or four times the number of linear inequalities" for the simplex algorithm, whereas the ellipsoid method does not fare nearly so well.
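The sketch below illustrates the two kinds of evidence, using a generic sorting algorithm rather than linear programming (the simplex and ellipsoid results above are not reproduced here): an implementation-independent operation count on a worst-case input, alongside the behaviour on a typical input that, as Tarjan notes, can tell a quite different story.

```python
def insertion_sort_comparisons(data):
    """Count key comparisons made by insertion sort: a metric that is
    independent of language, compiler, and machine."""
    a, comparisons = list(data), 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return comparisons

n = 500
worst_case = list(range(n, 0, -1))  # reverse-sorted input: roughly n^2 / 2 comparisons
nearly_sorted = list(range(n))
nearly_sorted[10], nearly_sorted[11] = nearly_sorted[11], nearly_sorted[10]  # a typical "easy" input

print(insertion_sort_comparisons(worst_case))     # grows quadratically with n
print(insertion_sort_comparisons(nearly_sorted))  # close to n: the worst-case metric alone would mislead
```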
Given an artifact whose performance has been evaluated, it is important to determine why and how the artifact worked or did not work within its environment. Such research applies natural science methods to IT artifacts. We theorize and then justify theories about those artifacts.

Theories explicate the characteristics of the artifact and its interaction with the environment that result in the observed performance. This requires an understanding of the natural laws governing the artifact and those governing the environment in which it operates. Furthermore, the interaction of the artifact with its environment may lead to theorizing about the internal workings of the artifact itself or about the environment.
Among others, Chan et al. [8] conclude that, for inexperienced end-users, semantic data models and query languages are more effective for database access than the relational data model and SQL. While this is an important evaluative result, the critical question for designers of database interfaces is, why are the semantic data models more effective? Norman [56] theorizes that human performance in man-machine interactions depends on the "gulf" between the human's conceptualization of the task and the presented interface. When the gulf is large, performance is poor. When the gulf is small, performance is good. One could theorize, then, that semantic data models better correspond to an end-user's conceptualization of a database than does the relational model.
Mathematically-based artifacts may lead a researcher to posit mathematical theorems in order to explain behaviour and improve performance. Genetic algorithms, for example, were developed nearly twenty years ago and have been effectively applied to numerous discrete optimization problems [15] including distributed database design [47]. They were initially developed based on the analogy of natural selection resulting in improved living populations [28,29]. Yet only recently has theory been developed to explain how various control parameters affect performance [25].
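A minimal sketch of the kind of artifact in question, with invented parameter values: a genetic algorithm maximizing a toy bit-string objective. The control parameters whose effects such theory addresses (population size, crossover and mutation rates) appear explicitly; nothing here reproduces the specific designs of [15], [28], [29], or [47].

```python
import random

def genetic_search(fitness, n_bits=20, pop_size=30, generations=60,
                   crossover_rate=0.9, mutation_rate=0.02, seed=1):
    """Toy genetic algorithm over bit strings; control parameters are explicit."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            if rng.random() < crossover_rate:            # one-point crossover
                cut = rng.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            child = [b ^ (rng.random() < mutation_rate) for b in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_search(fitness=sum)  # "one-max" objective: maximize the number of 1 bits
print(sum(best), "of 20 bits set")
```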
Theorizing in IT research must explicate those characteristics of the IT artifact operating in its environment that make it unique to IT and require unique explanations. Theorizing that Newton's theory of gravity holds for IT, and testing it by dropping a PC from an office window in the MIS department, is obviously not valuable. While this example is extreme, the issue is that researchers should not simply test theories from reference disciplines in an IT context unless there is good reason to believe that IT phenomena are unique in some way that would affect the applicability of that theory. On the other hand, where IT artifacts significantly affect the task or the environment, adapting theories from referent disciplines can be extremely fruitful (e.g., theories of group dynamics in a GDSS environment [58]).
Given a generalization or theory we must justify that explanation. That is, we must gather evidence to test the theory. For artifacts based on mathematical formalisms or whose interactions with the environment are represented mathematically (such as in information economics research or automated database design), this can be done
mathematically, i.e., by using mathematics and
logic to prove posited theorems. In such cases, we
normally prove things about the performance of
the artifact based on some metric. Database de-
sign algorithms, for example, have been proven to
produce "optimal" designs in the sense that they
optimize some performance measure (e.g., mini-
mize the operating cost) for the given set of
database activities. Of course, these results are
only valid within the underlying formal problem
representation (model). Furthermore, the per-
formance metrics themselves must be justified
(e.g., is response time more significant than oper-
ating cost?).
Justification for non-mathematically repre-
sented IT artifacts follows the natural science
methodologies governing data collection and
analysis [34].
3.3. Application to IT research
Constructs, models, methods, and instantia-
tions are each artifacts that address some task.
Research activities related to these artifacts are:
build, evaluate, theorize, and justify. Build and
evaluate are design science research activities
aimed at improving performance. Theorize and
justify are natural science research activities
aimed at extracting general knowledge by propos-
ing and testing theories.
This four by four framework produces sixteen
cells describing viable research efforts. Research
can build, evaluate, theorize about, or justify the-
ories about constructs, models, methods, or in-
stantiations. Different cells have different objec-
tives and different methods are appropriate in
different cells. Research efforts often cover mul-
tiple cells. Evaluation of research should be based
on the cell or cells in which the research lies.
Research in the build activity should be judged
based on value or utility to a community of users.
Building the first of virtually any set of con-
structs, model, method, or instantiation is deemed
to be research, provided the artifact has utility for
an important task. The research contribution lies
in the novelty of the artifact and in the persua-
siveness of the claims that it is effective. Actual
performance evaluation is not required at this
stage. The significance of research that builds
subsequent constructs, models, methods, and in-
stantiations addressing the same task is judged
based on "significant improvement," e.g., more
comprehensive, better performance. Research in
database design has numerous examples of re-
searchers extending or combining constructs,
models, methods, and instantiations developed by
other researchers (see, e.g., [47,48]).
Recognizing that there is nothing new under
the sun, "first" is usually interpreted to mean,
"never done within the discipline." The relational
data model [11], for example, was the first at-
tempt to define a formalism in which to describe
data and retrieval operations. Relations and rela-
tional operators had never before been used to
describe data, although they had been used ex-
tensively in mathematics to describe sets and set
behaviour.
While there is little argument that novel con-
structs, models, and methods are viable research
results, there is less enthusiasm in the informa-
tion technology literature for novel instantiations.
Novel instantiations, it is argued, are simply ex-
tensions of novel constructs, models, or methods.
Hence, we should value the constructs, models,
and methods, but not the instantiations. Reaction
is quite different in the computer science litera-
ture where a key determinant of the value of
constructs, models, and methods is the existence
of an implementation (instantiation).
In much of the computer science literature it is
realized that constructs, models, and methods
that work "on paper" will not necessarily work in
real world contexts. Consequently, instantiations
provide the real proof. This is evident, for exam-
ple, in AI where achieving "intelligent behaviour"
is a research objective. Exercising instantiations
that purport to behave intelligently is the primary
means of identifying deficiencies in the con-
structs, models, and methods underlying the in-
stantiation.
On the other hand, instantiations that apply
known constructs, models, and methods to novel
tasks may be of little significance. Of primary
concern is the level of uncertainty over the viabil-
ity of the constructs, models, and methods for the
task. For example, there is no reason to believe
that expert system technology would not be appli-
cable to the task of choosing stock investments.
The task is characterized as having a limited number of choices, reasonably well defined selection criteria, and a number of experts currently performing the task. Instantiating an
expert system for this task, even the first, would
be of marginal scientific significance.
Research in the evaluate activity develops met-
rics and compares the performance of constructs,
models, methods, and instantiations for specific
tasks. Metrics define what a research area is
trying to accomplish. Since "the second" or sub-
sequent constructs, models, methods, or instanti-
ations for a given task must provide significant
performance improvements, evaluation is the key
activity for assessing such research.
Evaluation of constructs tends to involve com-
pleteness, simplicity, elegance, understandability,
and ease of use. Data modelling formalisms, for example, are constructs with which to represent the logical structure of data. Proposing a data modelling formalism falls within the build-constructs cell of the framework. The database literature was subjected to a plethora of data modelling formalisms [31]. As researchers found deficiencies in existing data modelling formalisms, they proposed additional or slightly different constructs to meet their specific tasks (e.g., modelling statistical and scientific data). Finally, reviewers screamed, "not yet another..."
Models are evaluated in terms of their fidelity
with real world phenomena, completeness, level
of detail, robustness, and internal consistency.
For example, numerous mathematical models
have been developed for database design prob-
lems. Posing a new database design model may
be a significant contribution; however, to inform
researchers in the field, the new model must be
positioned with respect to existing models. Often existing models are extended to capture more of the relevant aspects of the task. March and Rho [47], for example, extended the distributed
database model posed by Cornell and Yu [12] to
include not only the allocation of data and opera-
tions, but data replication and concurrency con-
trol as well, significant issues in this area.
Evaluation of methods considers operationality (the ability to perform the intended task or the ability of humans to effectively use the method if it is not algorithmic), efficiency, generality, and ease of use. Associated with the numerous distributed database design models are numerous methods to solve problems represented in those models. While some use mathematical algorithms (e.g., linear or non-linear programming), others develop novel methods (e.g., iterative algorithms). In the former case, the method is not a significant research contribution, although it may be important to evaluate the effectiveness or efficiency of the method for this particular type of problem (e.g., are there characteristics of the model that make one method more or less applicable?). In the latter case, the method itself is a significant research contribution, to be studied and evaluated even apart from the application.

As another example, consider the numerous information system development methods. These can be evaluated for completeness, consistency, ease of use, and the quality of results obtained by analysts applying the method [33,56,64].
Evaluation of instantiations considers the effi-
ciency and effectiveness of the artifact and its
impacts on the envi ronment and its users. A
difficulty with evaluating instantiations is separat-
ing the instantiation from the constructs, models,
and methods embodied in it. CASE tools are an
example of instantiations. While these embody
certain constructs, models, and methods, the de-
veloper of the CASE tool must select from among
a wide array of available constructs, models, and
methods, and must decide how much latitude to
afford users [71]. It is these design choices that
differentiate CASE tools. Evaluations focus on
these differences and how they change the task of
system development.
Once metrics are developed, empirical work
may be necessary to perform the evaluation. Con-
structs, models, methods, and instantiations must
be exercised within their environments. Often this means obtaining a subject group to do the exercising. Often multiple constructs, models, methods, or instantiations are studied and compared. Issues that must be addressed include comparability, subject selection, training, time, and tasks. Methods for this type of evaluation are
not unlike those for justifying or testing theories.
However, the aim is to determine "how well" an
artifact works, not to prove anything about how
or why the artifact works.
The third and fourth columns of the frame-
work correspond to natural science activities, the-
orize and justify. The theorize activity involves
explaining why and how the effects came about,
i.e., why and how the constructs, models, meth-
ods, and instantiations work. This activity at-
tempts to unify the known data (observations of
effects) into viable theory - explanations of how
and why things happen. It may involve developing
constructs with which to theorize about con-
structs, models, methods, and instantiations. The
justify activity performs empirical and/or theoret-
ical research to test the theories posed. Such
theories, once justified, can provide direction for
the development of additional and better tech-
nologies.
As discussed earlier, Norman [55] developed
the construct of gulfs when theorizing about how
the constructs used in human-computer interac-
tion affect performance. This construct underlies
theory posed by Weber and Zhang [75] explaining
why analysts had difficulty with a particular data
modelling formalism (activity in the theorize-con-
structs cell). Its value has been substantiated by
several studies [3] (activity in the justify-con-
structs cell).
Theorizing about models can be as simple as
positing that a model used for design purposes is
true (e.g., that an expert's knowledge of a task is
accurately represented by a set of production
rules), or as complicated as developing an ex-
planatory model of an IT phenomenon (e.g., repre-
senting system-task fit as a basis for user percep-
tions of satisfaction with information technology
[22,23]).
The theorize-model cell also includes adapting
theories from base disciplines or developing new,
general theories to explain how or why IT phe-
nomena work. Nolan's [54] stage theory, for ex-
ample, posits that the growth pattern of an EDP
organization follows a sigmoid shape. Although
criticized as a simple application of general sys-
tems theory [74], it does explain important phe-
nomena in the organizational appropriation of
information technologies. The theory has been
tested by various researchers, albeit with mixed
results.
As another example, DeSanctis and Poole [17]
proposed Adaptive Structuration Theory (AST)
to explain differences in how GDSS technologies
were used and in the effectiveness of that use for
different group and task characteristics. Although
tested by those researchers in a laboratory study,
they note that evidence from real groups engaged
in a diversity of real tasks over a substantial
period of time must be gathered.
For algorithmic methods, theorizing can be
formal and mathematical with logical proofs be-
ing used for justification (e.g., database design
algorithms) or it can be behavioral, explaining
why or how a method works in practice (e.g., why
and how particular system development methods
work). As discussed earlier, formal, mathematical
theories are proven only within their defined
formalism - they must be tested in practice to
validate the formalism.
Theorizing about instantiations may be viewed
as a first step toward developing more general
theories (e.g., explaining how electronic mail af-
fects communication may lead to more general
theories of the effects of technology or more
general theories about human communication) or
as the specialization of an existing general theory
(e.g., applying innovation-diffusion theory to spe-
cific information technologies such as spread-
sheets and electronic mail can facilitate under-
standing of the IT phenomena).
4. Discussion and prescriptions for IT research
Key to understanding IT research is the fact
that IT deals with artificial phenomena rather
than natural phenomena. Brooks [6, p. 12] ob-
serves, "Einstein argued that there must be sim-
plified explanations of nature because God is not
capricious or arbitrary," but concludes that, "No
such faith comforts the software engineer" be-
cause the objects of study "were designed by
different people, rather than by God." The phe-
nomena studied in IT research are artifacts that
are designed and built by man to accomplish the purposes of man.

Implications for IT research are threefold. First, there may not, in fact, be an underlying deep structure to support a theory of IT. Our theories may need, instead, to be based on theories of the natural phenomena (i.e., people) that are impacted by the technology. Second, our artifacts are perishable, hence our research results are perishable. As needs change, the artifacts produced to meet those needs also change. A theory of how programmers use a now-defunct language, for example, would be of little interest. Third, we are producing IT artifacts at an ever increasing rate, resulting in innumerable phenomena to study. Explicating and evaluating IT artifacts (constructs, models, methods, and instantiations) will facilitate their categorization so that research efforts will not be wasted building and studying artifacts that have already been built and studied "in kind."
Dat a model l i ng is an ar ea in which art i fact s
wer e not adequat el y expl i cat ed or eval uat ed. As a
result, r esear ch effort s pr oposed dat a model l i ng
formal i sms t hat , in essence, had al ready been
bui l t - t hey vari ed pri mari l y in t he names chosen
for const ruct s. Met ri cs were not devel oped and
subst ant i ve compar at i ve eval uat i ons wer e not
done. We still lack an accept ed set of met ri cs and
compar at i ve analysis of dat a model l i ng for-
malisms. We f ur t her lack a t heor y of const ruct s
for dat a model l i ng formal i sms, al t hough work is
begi nni ng in each of t hese areas [3,73].
Similarly in dat abase management , significant
amount s of build r esear ch have been done with-
out adequat e t heor i es t o expl i cat e and studies to
eval uat e its under l yi ng const ruct s, model s, met h-
ods, and i nst ant i at i ons. As discussed earlier, t he
rel at i onal dat a model [11] has been t he domi nant
formal i sm in bot h r esear ch and pract i ce over t he
past decade. However , it was not compar ed with
compet i ng formal i sms unt i l recent l y [8]. Had t he
rel at i onal model been compar ed to formal i sms
such as DI AM [63], for end- user dat abase i nt er-
action, it woul d have been cl ear t hat rel at i onal
l anguages wer e not as effect i ve as t he graphi cal
const ruct s in DI AM (similar to t hose in cur r ent
semant i c dat a model s), and research in end- user
l anguages coul d have shi ft ed f r om rel at i onal con-
st ruct s to semant i c const ruct s.
With the dominance of relational DBMSs in practice, the performance shortcomings for certain types of applications (e.g., CAD and engineering databases) have become apparent. Among other things, the "next generation" of DBMSs, the so-called Object-Oriented DBMSs, have reimplemented complex record structures (at the physical level) and provide efficient system pointers for interfile connections, representations not unlike those described in DIAM and implemented in pre-relational DBMSs!
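The contrast can be sketched informally. The following Python fragment (an illustration of ours with invented customer and order records, not a description of any particular DBMS) contrasts a value-based interfile connection, recovered at query time by matching key values as in relational systems, with a stored pointer-like reference of the kind pre-relational and object-oriented DBMSs maintain.

```python
# Illustrative sketch: two ways of connecting records across files.

# 1. Value-based connection, in the spirit of the relational model:
#    files are sets of records, and interfile connections are recovered
#    at query time by matching key values (a join).
customers = [{"cust_id": 1, "name": "Ace Tooling"},
             {"cust_id": 2, "name": "Borealis Labs"}]
orders = [{"order_id": 10, "cust_id": 2, "total": 740.0},
          {"order_id": 11, "cust_id": 1, "total": 125.5}]

def customer_of(order):
    # Each lookup scans (or, in a real system, probes an index on) the
    # customer file to rediscover the connection from its key value.
    return next(c for c in customers if c["cust_id"] == order["cust_id"])

# 2. Pointer-based connection, in the spirit of pre-relational and
#    object-oriented systems: the interfile connection is stored as a
#    direct reference, so traversal is a single dereference.
class Customer:
    def __init__(self, name):
        self.name = name

class Order:
    def __init__(self, total, customer):
        self.total = total
        self.customer = customer  # system-maintained "pointer"

ace = Customer("Ace Tooling")
order = Order(125.5, ace)

if __name__ == "__main__":
    print(customer_of(orders[1])["name"])  # value-based: Ace Tooling
    print(order.customer.name)             # pointer-based: Ace Tooling
```

The sketch only locates the trade-off described in the text: value-based joins keep files independent of one another, while stored pointers buy traversal speed at the cost of maintaining the references.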
Current research in system development can be similarly characterized. Significant attention has been given to the build activity. Process, data, and behaviour representations (formalisms) have been built and are in use (see, e.g., [31,56]). Methods to build models using these representations have been proposed, although these are mostly ad hoc (see, e.g., [69]). CASE tools instantiate representations and methods. Thus far there has been only moderate evaluation activity for any of these artifacts. Several researchers compare and evaluate data modelling formalisms (e.g., [3,75]), methods (e.g., [33,64]), and instantiations (e.g., [71]). Additional research is needed to determine what actually works in practice. Much of this work should be empirical.
If, indeed, progress is to be made in system development, not only must we determine what works, we must understand why it works. There are virtually no generalizations or theories explaining why and how (or even if) any of these artifacts work. Wand and Weber's research [72,73] on ontology may provide a beginning to such a theory; however, significant work is needed before it can be considered a theory of system development. Further, any such theory must be justified scientifically before its principles can be introduced back into the artifacts and the cycle repeated.
References
[1] P. Achinstein, Concepts of Science, Johns Hopkins Press,
Baltimore, 1968.
[2] ACM Turing Award, Dennis Ritchie and Ken L. Thompson, 1983 ACM Turing Award Recipients, Communications of the ACM, Vol 27, No 8, August 1984, p. 757.
[3] D. Batra, J.A. Hoffer and R.P. Bostrom, A Comparison of User Performance Between the Relational and the Extended Entity Relationship Models in the Discovery Phase of Database Design, Communications of the ACM, Vol 33, No 2, February 1990, pp. 126-139.
[4] W. Bechtel, Philosophy of Science: An Overview for Cognitive Science, Lawrence Erlbaum, Hillsdale, NJ, 1988.
[5] D.E. Bell, H. Raiffa and A. Tversky (eds.), Decision Making: Descriptive, Normative and Prescriptive Interactions, Cambridge University Press, Cambridge, UK, 1988.
[6] F.P. Brooks, Jr., No Silver Bullet: Essence and Accidents of Software Engineering, IEEE Computer, April 1987, pp. 10-19.
[7] P. Caws, The Structure of Discovery, Science, Vol 166, December 12, 1969, pp. 1375-1380.
[8] H.C. Chan, K.K. Wei and K.L. Siau, Conceptual Level Versus Logical Level User-Database Interaction, Proceedings of the 12th International Conference on Information Systems, New York, December 1991, pp. 29-40.
[9] P.P. Chen, The Entity-Relationship Model: Toward a Unified View of Data, ACM Transactions on Database Systems, Vol 1, No 1, March 1976, pp. 9-36.
[10] S. Christodoulakis, Implications of Certain Assumptions in Database Performance Evaluation, ACM Transactions on Database Systems, Vol 9, No 2, June 1984, pp. 163-186.
[11] E.F. Codd, A Relational Model of Data for Large Shared Data Banks, Communications of the ACM, Vol 13, No 6, June 1970.
[12] D.W. Cornell and P.S. Yu, On Optimal Site Assignment for Relations in the Distributed Database Environment, IEEE Transactions on Software Engineering, Vol 15, No 8, August 1989, pp. 1004-1009.
[13] R. Davis and D.B. Lenat, Knowledge-Based Systems in Artificial Intelligence, McGraw-Hill, New York, 1982.
[14] G. Davis and M. Olson, Management Information Systems: Conceptual Foundations, Structure and Development, 2nd Ed., McGraw-Hill, 1985.
[15] K.A. De Jong and W.M. Spears, Using Genetic Algorithms to Solve NP-Complete Problems, Proceedings of the 3rd International Conference on Genetic Algorithms, June 4-7, 1989, Morgan Kaufmann Publishers.
[16] G. DeSanctis and R.B. Gallupe, A Foundation for the Study of Group Decision Support Systems, Management Science, Vol 33, No 5, 1987, pp. 589-609.
[17] G. DeSanctis and M.S. Poole, Capturing the Complexity in Advanced Technology Use: Adaptive Structuration Theory, Organization Science, 1993 (to appear).
[18] P.F. Drucker, The Coming of the New Organization, Harvard Business Review, Vol 66, No 1, January-February 1988, pp. 45-53.
[19] P.F. Drucker, The New Productivity Challenge, Harvard Business Review, Vol 69, No 6, November-December 1991, pp. 45-53.
[20] F. Ferre, Philosophy of Technology, Prentice Hall, Englewood Cliffs, NJ, 1988.
[21] J.F. George, G.K. Easton, J.F. Nunamaker, Jr. and G.B. Northcraft, A Study of Collaborative Group Work With and Without Computer-Based Support, Information Systems Research, Vol 1, No 4, June 1988, pp. 394-415.
[22] D.L. Goodhue, I/S Attitudes: Toward Theoretical and Definitional Clarity, Database, Fall/Winter 1988.
[23] D.L. Goodhue, User Evaluations of MIS Success: What Are We Really Measuring?, Proceedings of the Hawaii International Conference on Systems Sciences, 1992.
[24] G.A. Gorry and M.A. Scott-Morton, A Framework for Management Information Systems, Sloan Management Review, October 1971, pp. 55-70.
[25] J.J. Grefenstette, Optimization of Control Parameters for Genetic Algorithms, IEEE Transactions on Systems, Man and Cybernetics, Vol SMC-16, No 1, January/February 1986, pp. 122-128.
[26] J. Hartmanis and H. Lin (Editors), Computing the Future: A Broader Agenda for Computer Science and Engineering, National Academy Press, Washington, D.C., 1992.
[27] C.G. Hempel, Philosophy of Natural Science, Prentice Hall, Englewood Cliffs, NJ, 1966.
[28] J.H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI, 1975.
[29] J.H. Holland, Genetic Algorithms, Scientific American, July 1992, pp. 66-72.
[30] J.E. Hopcroft, Computer Science: The Emergence of a Discipline, Communications of the ACM, Vol 30, No 3, March 1987, pp. 198-202.
[31] R. Hull and R. King, Semantic Database Modelling: Survey, Applications and Research Issues, ACM Computing Surveys, Vol 19, No 3, September 1987, pp. 201-260.
[32] B. Ives, S. Hamilton and G.B. Davis, A Framework for Research in Computer-Based Management Information Systems, Management Science, Vol 26, No 9, September 1980, pp. 910-934.
[33] S. Jarvenpaa and J. Machesky, End User Learning Behaviour in Data Analysis and Data Modelling Tools, Proceedings of the 7th International Conference on Information Systems, San Diego, 1986, pp. 152-167.
[34] M.A. Jenkins, Research Methodologies and MIS Research, in E. Mumford et al. (eds.), Research Methodologies in Information Systems, Elsevier Science Publishers B.V. (North Holland), 1985, pp. 103-117.
[35] A. Kaplan, The Conduct of Inquiry, Crowell, New York, 1964.
[36] R.M. Karp, Combinatorics, Complexity and Randomness, Communications of the ACM, Vol 29, No 2, February 1986, pp. 98-111.
[37] W. Kent, Data and Reality, North Holland, 1978.
[38] W. Kent, Limitations of Record-Based Information Models, ACM Transactions on Database Systems, Vol 4, No 1, March 1979, pp. 107-131.
[39] K.L. Kraemer and J.L. King, Computer-Based Systems for Cooperative Work and Group Decision Making, ACM Computing Surveys, Vol 20, No 2, June 1988, pp. 115-146.
[40] T.S. Kuhn, The Structure of Scientific Revolutions, University of Chicago Press, 1970.
[41] P. Langley, H.A. Simon, G.L. Bradshaw and J.M. Zytkow, Scientific Discovery: Computational Explorations of the Creative Processes, MIT Press, Cambridge, MA, 1987.
[42] A.S. Lee, A Scientific Methodology for MIS Case Studies, MIS Quarterly, Vol 13, No 1, March 1989, pp. 33-50.
[43] J. Leplin (ed.), Scientific Realism, University of California Press, Berkeley, CA, 1984.
[44] S.E. Madnick, The Challenge: To Be Part of the Solution Instead of Being Part of the Problem, Proceedings of the Second Annual Workshop on Information Technology, Dallas, Texas, December 12-13, 1992.
[45] S.T. March, Techniques for Structuring Database Records, ACM Computing Surveys, Vol 15, No 1, March 1983, pp. 45-79.
[46] S.T. March, Research Issues in Information Technology, Keynote Address, Proceedings of the Second Annual Workshop on Information Technology Systems, Dallas, TX, December 12-13, 1992, pp. 10-16.
[47] S.T. March and S. Rho, Allocating Data and Operations to Nodes in Distributed Database Design, IEEE Transactions on Knowledge and Data Engineering (to appear).
[48] S.T. March and G.D. Scudder, On the Selection of Efficient Record Segmentations and Backup Strategies for Large Shared Databases, ACM Transactions on Database Systems, Vol 9, No 3, September 1984, pp. 409-438.
[49] R.O. Mason and I.I. Mitroff, A Program for Research on Management Information Systems, Management Science, January 1973, pp. 475-485.
[50] P.E. Meehl, What Social Scientists Don't Understand, in D.W. Fiske and R.A. Shweder (eds.), Metatheory in Social Science, University of Chicago Press, Chicago, 1986, pp. 315-338.
[51] P. Mishra and M.H. Eich, Join Processing in Relational Databases, ACM Computing Surveys, Vol 24, No 1, March 1992, pp. 63-113.
[52] A. Newell and H.A. Simon, Computer Science as Empirical Inquiry: Symbols and Search, Communications of the ACM, Vol 19, No 3, March 1976, pp. 113-126.
[53] A. Newell and H.A. Simon, Human Problem Solving, Prentice-Hall, 1972.
[54] R.L. Nolan, Managing the Computer Resources: A Stage Hypothesis, Communications of the ACM, Vol 16, No 7, July 1973, pp. 399-405.
[55] D.A. Norman, Cognitive Engineering, in D.A. Norman and S.W. Draper (eds.), User Centered System Design, Lawrence Erlbaum Associates, Hillsdale, NJ, 1986, pp. 31-61.
[56] T.W. Olle, J. Hagelstein, I.G. Macdonald, C. Rolland, H.G. Sol, F.J.M. Van Assche and A.A. Verrijn-Stuart, Information Systems Methodologies: A Framework for Understanding, Addison-Wesley Publishing Company, Wokingham, England, 1988.
[57] J. Parsons and Y. Wand, Guidelines for Evaluating Classes in Data Modelling, Proceedings of the Thirteenth International Conference on Information Systems, Dallas, TX, December 13-16, 1992, pp. 1-8.
[58] M.S. Poole and G. DeSanctis, Microlevel Structuration in Computer-Supported Group Decision-Making, Human Communication Research, Vol 19, No 1, 1992, pp. 5-49.
[59] K.R. Popper, Conjectures and Refutations: The Growth of Scientific Knowledge, Harper and Row, New York, 1963.
[60] R. Rorty, Consequences of Pragmatism, University of Minnesota Press, Minneapolis, MN, 1982.
[61] D.A. Schon, The Reflective Practitioner: How Professionals Think in Action, Basic Books, New York, 1993.
[62] M.A. Scott-Morton (Editor), The Corporation of the 1990s: Information Technology and Organizational Transformation, Oxford University Press, 1990.
[63] M.E. Senko, E.B. Altman, M.M. Astrahan and P.L. Fehder, Data Structures and Accessing in Data-Base Systems, IBM Systems Journal, Vol 12, No 1, 1973, pp. 30-93.
[64] P. Shoval and M. Even-Chaime, Database Schema Design: An Experimental Comparison Between Normalization and Information Analysis, Database, Vol 18, No 3, Spring 1987, pp. 30-39.
[65] H.A. Simon, The Sciences of the Artificial (2nd ed.), MIT Press, Cambridge, MA, 1981.
[66] A.M. Starfield and A.L. Bleloch, Building Models for Conservation and Wildlife Management, Macmillan, New York, 1986.
[67] N.A. Stillings, M.H. Feinstein, J.L. Garfield, E.L. Rissland, D.A. Rosenbaum, S.E. Weisler and L. Baker-Ward, Cognitive Science, MIT Press, Cambridge, MA, 1987.
[68] R.E. Tarjan, Algorithm Design, Communications of the ACM, Vol 30, No 3, March 1987, pp. 205-212.
[69] T.J. Teorey, Database Modelling and Design, Morgan Kaufmann Publishers, Inc., San Mateo, CA, 1990.
[70] T.J. Teorey and J.P. Fry, Design of Database Structures, Prentice-Hall, Inc., Englewood Cliffs, NJ, 1982.
[71] I. Vessey, S. Jarvenpaa and N. Tractinsky, Evaluation of Vendor Products: CASE Tools as Methodology Companions, Communications of the ACM, Vol 35, No 4, April 1992, pp. 90-105.
[72] Y. Wand and R. Weber, An Ontological Analysis of Some Fundamental Information Systems Concepts, Proceedings of the Ninth International Conference on Information Systems, Minneapolis, MN, November 30-December 3, 1988.
[73] Y. Wand and R. Weber, Toward a Theory of the Deep Structure of Information Systems, Proceedings of the Eleventh International Conference on Information Systems, Copenhagen, Denmark, December 16-19, 1990.
[74] R. Weber, Toward a Theory of Artifacts: A Paradigmatic Base for Information Systems Research, Journal of Information Systems, Vol 1, Spring 1987, pp. 3-19.
[75] R. Weber and Y. Zhang, An Ontological Evaluation of NIAM's Grammar for Conceptual Schema Design, Proceedings of the Twelfth International Conference on Information Systems, New York, December 1991, pp. 75-82.
Salvatore T. March is a Professor in the Information and Decision Sciences Department, Carlson School of Management, University of Minnesota. He received his BS, MS, and PhD degrees in Operations Research from Cornell University. His primary research interests are Information System Development, Logical and Physical Database Design, and Information Resource Management. His research has appeared in Information and Management, ACM Computing Surveys, ACM Transactions on Database Systems, Communications of the ACM, IEEE Transactions on Knowledge and Data Engineering, The Journal of MIS, Information Systems Research, Information Science, Decision Sciences, and Management Science. He has served as the Editor-in-Chief of ACM Computing Surveys and is currently an Associate Editor for MIS Quarterly.
Gerald F. Smith is an Assistant Professor in the Department of Management, College of Business Administration, University of Northern Iowa, Cedar Falls, IA 50614-0125, USA. His primary area of research is managerial problem solving, with special interests in problem identification and definition, problem structures, and quality problem solving. Past research has appeared in Decision Support Systems, Management Science, Organizational Behavior and Human Decision Processes, Omega, and the IEEE Transactions on Systems, Man, and Cybernetics.
