
AI Magazine Volume 28 Number 1 (2007) (© AAAI)

Articles

Perpetual Self-Aware
Cognitive Agents
Michael T. Cox

■ To construct a perpetual self-aware cognitive agent that can continuously operate with independence, an introspective machine must be produced. To assemble such an agent, it is necessary to perform a full integration of cognition (planning, understanding, and learning) and metacognition (control and monitoring of cognition) with intelligent behaviors. The failure to do this completely is why similar, more limited efforts have not succeeded in the past. I outline some key computational requirements of metacognition by describing a multistrategy learning system called Meta-AQUA and then discuss an integration of Meta-AQUA with a nonlinear state-space planning agent. I show how the resultant system, INTRO, can independently generate its own goals, and I relate this work to the general issue of self-awareness by machine.

Although by definition all AI systems can perform intelligent activities, virtually none can understand why they do what they do, nor how. Many research projects have sought to develop machines with some metacognitive capacity, yet until recently no effort has attempted to implement a complete, fully integrated, metalevel architecture. Many in the AI community argue that base-level cognition in isolation is insufficient and that a fully situated agent is necessary. Similarly I claim that a metalevel cognitive theory must be comprehensive if it is to be successful. Previous attempts, including the research of Cox and Ram (1999a; Cox 1996b), Fox and Leake (1995; Fox 1996), and Murdock and Goel (2001; Murdock 2001), to build introspective agents have been insufficient because of their limited extent and scope. This article examines a broader approach to the design of an agent that understands itself as well as the world around it in a meaningful way.1

Figure 1 shows a decomposition of the interrelationships among problem solving, comprehension, and learning. These reasoning processes share a number of intersecting characteristics. As indicated by the intersection labeled A on the lower left, learning can be thought of as a deliberate planning task with its own set of learning goals (Cox and Ram 1995). Instead of a goal to alter the world such as having block X on top of block Y, a learning goal might be to obtain a semantic segregation between concepts X and Y by altering the background knowledge through a learning plan. As indicated by the B intersection of figure 1, the learning process and the story understanding task within natural language processing share many traits (Cox and Ram 1999b). In both, explanation is central. To understand a story completely, it is necessary to explain unusual or surprising events and to link them into a causal interpretation that provides the motivations and intent supporting the actions of characters in the story. To learn effectively, an agent must be able to explain performance failures by generating the causal factors that led to error so that similar problems can be avoided in the future. Here I discuss in some detail issues related to the letter D intersection.2

The area labeled D represents the intersection of planning and comprehension, normally studied separately. Planning is more than generating a sequence of actions that if executed will transform some initial state into a given goal state. Instead planning is embedded in a larger plan-management process that must interleave planning, execution, and plan understanding (Chien et al. 1996; Pollack and Horty 1999). Furthermore, while many AI systems accept goals as input or have inherent background goals that drive system behavior,

32 AI MAGAZINE Copyright © 2007, Association for the Advancement of Artificial Intelligence. All rights reserved. ISSN 0738-4602

few if any systems strategically derive their own explicit goals given an understanding (that is, comprehension) of the environment.

A major objective of this article is to show how a preliminary system called INTRO (initial introspective cognitive-agent) can systematically determine its own goals by interpreting and explaining unusual events or states of the world. The resulting goals seek to change the world in order to lower the dissonance between what it expects and the way the world is. The mechanism that INTRO uses for explanation is the same as the mechanism its metareasoning component uses when explaining a reasoning failure. The resulting goals in the latter case seek to change its knowledge in order to reduce the chance of repeating the reasoning failure (that is, a learning goal is to change the dissonance between what it knows and what it should know).

I begin by describing the Meta-AQUA comprehension component that implements a theory of introspective multistrategy learning. The general idea is that an agent needs to represent a naive theory of mind to reason about itself. It does not need a complete and consistent axiomization of all mental activities and computations; rather it need only represent basic, abstract patterns within its mental life like "I was late for my appointment, because I forgot where I parked the car." Next I present the PRODIGY planning component that creates action sequences for INTRO to perform in the Wumpus World simulated environment. I then examine the current implementation of the INTRO cognitive agent and follow with a section that discusses the difficult choice an agent has between doing something or learning something in the face of anomalous input. I conclude by considering what it might mean for an artificial agent to be self-aware.

[Figure 1. Hierarchical Decomposition of Reasoning. Reasoning decomposes into problem solving (including planning), comprehension (including story understanding), and learning; their pairwise intersections are labeled A, B, C, and D.]

Self-Understanding and Meta-AQUA

Meta-AQUA3 (Cox 1996b; Cox and Ram 1999a; Lee and Cox 2002) is an introspective multistrategy learning system that improves its story-understanding performance through a metacognitive analysis of its own reasoning failures. The system's natural language performance task is to "understand" stories by building causal explanatory graphs that link the individual subgraph representations of the events into a coherent whole, where an example measure of coherency is minimizing the number of connected components. The performance subsystem uses a multistrategy approach to comprehension. Thus, the top-level goal is to choose a comprehension method (for example, script processing, case-based reasoning, or explanation generation) by which it can understand an input. When an anomalous or otherwise interesting input is detected, the system builds an explanation of the event, incorporating it into the preexisting model of the story. Meta-AQUA uses case-based knowledge representations implemented as frames (Cox 1997) tied together by explanation-patterns (Schank 1986; Schank, Kass, and Riesbeck 1994; Ram 1993) that represent general causal structures.

Meta-AQUA uses an interest-driven, variable-depth interpretation process that controls the amount of computational resources applied to the comprehension task. Consider the example in figure 2. Sentences S1 through S3 generate no interest, because they represent normal actions that an agent does on a regular basis. But Meta-AQUA classifies S4 to be a violent action and, thus according to its interest criterion (discussed in more detail later), interesting. It tries to explain the action by hypothesizing that the agent shot the Wumpus because she wanted to practice sport shooting. An abstract explanation pattern, or XP, retrieved from memory instantiates this explanation, and the system incorporates it into the current model of the actions in the story. In sentence S6, however, the story specifies an alternate


S1: The Agent left home.
S2: She traveled down the lower path through the forest.
S3: At the end she swung left.
S4: Drawing her arrow she shot the Wumpus.
S5: It screamed.
S6: She shot the Wumpus, because it threatened her.

Figure 2. The Wumpus Story.
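The interest-driven, variable-depth processing of a story like figure 2 can be sketched as follows. This is an illustrative Python sketch only: the class names, the toy interest criterion, and the `explain` stub are assumptions for exposition, not Meta-AQUA's actual frame-based implementation.

```python
# Illustrative sketch of interest-driven, variable-depth story processing.
# Mundane events are skimmed into the story model; interesting ones
# (for example, violent actions) receive deep, explanation-based processing.

INTERESTING_CLASSES = {"violent-action", "loud-noise"}

def classify(event):
    """Assign a conceptual class to an input event (toy criterion)."""
    if event["verb"] in {"shoot", "stab"}:
        return "violent-action"
    if event["verb"] == "scream":
        return "loud-noise"
    return "mundane-action"

def explain(event):
    """Placeholder for XP retrieval and instantiation (hypothetical)."""
    return {"explains": event, "hypothesis": "retrieved-XP"}

def process_story(events):
    model = []
    for event in events:
        if classify(event) in INTERESTING_CLASSES:
            model.append(explain(event))   # deep processing: build an explanation
        else:
            model.append(event)            # skim: insert directly into the model
    return model

story = [
    {"actor": "agent", "verb": "go"},                       # like S1-S3: no interest
    {"actor": "agent", "verb": "shoot", "object": "wumpus"},  # like S4: interesting
]
model = process_story(story)
```

The design point mirrors the text: computational effort is allocated by interest, so only the shoot event triggers explanation.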

Failure Symptoms:
  Contradiction between expected explanation and actual explanation.
  Correspondence to figure 3: E = Wumpus-as-target-XP; A2 = Wumpus-as-threat-XP.

Faults:
  Novel situation; erroneous association.
  Correspondence to figure 3: outBK(M')*; index: I = physical-obj.

Learning Goals:
  Segregate competing explanations (goal 1: G1); acquire new explanation (goal 2: G2).

Learning Plan:
  Generalize threat explanation (specific threat explanation: A2); store and index new explanation (general threat explanation: E'; memory items: M and M'); mutually reindex two explanations.

Plan Execution Results:
  New general case of shoot explanation acquired (generalize(A2) E'); index new explanation (store(E') M'); reindex old explanation (index(M') I' = inanimate-obj; index(M) I = animate-obj).

Table 1. Learning from Explanation Failure.

* Out of the set of beliefs with respect to the background knowledge (Cox 1996b; Cox and Ram 1999a).
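The three learning steps that table 1 summarizes can be sketched as a small pipeline. The function names, fault labels, and goal labels below are hypothetical simplifications; in the real system the mapping is carried by Meta-XPs and the learning plan is built by the Nonlin planner.

```python
# Sketch of learning from explanation failure: symptom -> faults ->
# learning goals -> ordered learning plan (names are illustrative).

def assign_blame(symptom):
    """Step 1: explain the failure by mapping the symptom to causal faults."""
    if symptom == "contradiction-between-explanations":
        return ["novel-situation", "erroneous-association"]
    return []

def decide_what_to_learn(faults):
    """Step 2: post learning goals for each fault."""
    goals = []
    if "novel-situation" in faults:
        goals.append("acquire-new-explanation")
    if "erroneous-association" in faults:
        goals.append("segregate-competing-explanations")
    return goals

def construct_learning_plan(goals):
    """Step 3: order the learning steps; generalization must precede
    storage and indexing to avoid negative interactions."""
    plan = []
    if "acquire-new-explanation" in goals:
        plan += ["generalize-threat-explanation", "store-and-index-new-explanation"]
    if "segregate-competing-explanations" in goals:
        plan += ["mutually-reindex-explanations"]
    return plan

faults = assign_blame("contradiction-between-explanations")
plan = construct_learning_plan(decide_what_to_learn(faults))
# The ordering constraint from the text holds in the constructed plan.
assert plan.index("generalize-threat-explanation") < plan.index("store-and-index-new-explanation")
```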

explanation (that is, the shoot action is in response to a threat). This input triggers an expectation failure, because the system had expected one explanation to be the case, but another proved true instead.

When Meta-AQUA detects an explanation failure such as this one, the performance module passes a trace of the prior reasoning to the learning subsystem. Three sequences of events then need to occur for effective learning to take place. First, it must explain the failure by mapping the failure symptom to the causal fault. Second, it must decide what to learn by generating a set of learning goals. Third, it must construct a learning strategy by planning to achieve the goal set.

To explain why the failure occurred, Meta-AQUA assigns blame by applying an introspective explanation to the reasoning trace. A meta-explanation pattern (Meta-XP) is developed by retrieving an XP using the failure symptom as a probe into memory. Meta-AQUA instantiates the retrieved meta-explanation and binds it to the trace of reasoning that preceded the failure. The resulting structure (see figure 3) is then checked for applicability. If the Meta-XP does not apply correctly, then another probe is attempted. An accepted Meta-XP either provides a set of learning goals (determines what to learn) that are designed to modify the system's background knowledge or generates additional questions to be posed about the failure. Once a set of learning goals is posted, they are passed to a nonlinear planner for building a learning plan (strategy construction).

Figure 3 represents the following pattern. Meta-AQUA posed an initial question. In this case the question was "Why did the agent shoot the Wumpus?" A memory probe returned an inappropriate explanation that contradicted the actual explanation from the story. This resulted in an expectation failure and was accompanied by a failure to retrieve the correct explanation. But the agent's memory did not contain the correct explanation, so Meta-AQUA could not have made the right inference.

[Figure 3. Meta-explanation Graph Structure for Why the Reasoner Failed to Explain Correctly Why the Agent Shot the Wumpus. EF corresponds to the expectation failure and RF corresponds to the retrieval failure.]

Table 1 lists the major state transitions that the three learning processes (explaining the failure, deciding what to learn, and constructing a learning strategy) produce. Figure 3 depicts the meta-explanation structure as a graph, and the third column of table 1 cross-references each row with nodes and links from the graph. The learning plan is fully ordered to avoid negative interactions. For example, the generalization step must precede the other steps. A knowledge dependency exists between the changes on the explanation as a result of the generalization and the use of this concept by the indexing. After the learning is executed and control returns to sentence processing, subsequent sentences concerning the shoot predicate trigger the appropriate explanation.

The key to effective learning, and hence improved comprehension performance, in Meta-AQUA is that the system explicitly represents not just the actions in the story but also its own cognitive processes (for example, explanation). Because such declarative knowledge structures exist, it can introspectively explain anomalous or otherwise interesting mental events. This enables it to reason about the possible factors responsible for its failure to explain the shooting correctly. For example, figure 3 shows not only what happened in the reasoning but what did not happen. That is, it retrieved an inappropriate explanation (the Wumpus was being used for target practice) and did not retrieve the actual explanation (the Wumpus posed a threat, and the threat was being eliminated). Furthermore, it represents a novel situation where no item exists in memory to retrieve, as opposed to one where the correct explanation was forgotten because the right memory indexes did not exist. This alternate meta-explanation pattern (that is, forgetting) is in Meta-AQUA's repertoire (see Cox 1994b), but it does not apply to the current mental trace.

The metacognitive process above is a form of blame assignment called case-based introspection (Cox 1994a), and it supports the generation of explicit learning goals. The learning goals that are attached to the Meta-XPs allow the system to change its background knowledge, and hence its performance, as a function of the explanation of failure. Cox and Ram (1999a) describe technical details concerning the planning to learn approach (that is, the A intersection of figure 1). In short, a set of learning operators represented in Nonlin's4 schema representation language (Ghosh et al. 1992; Tate 1976) encapsulates the strategies of generalization, abstraction, memory indexing, and others. The learning goals are presented to Nonlin, and it then creates a sequence of calls to the learning algorithms as a standard planner creates a sequence of environmental actions.

A pseudorandom story generator allows the testing of the system on thousands of input examples. I tested three different systems by feeding them 105 stories and asking them comprehension questions about each one. The system scored points for each correct answer. Further details of the experiment can be found in Cox and Ram (1999a). The data in figure 4 show that the approach sketched here outperforms an alternative that does not use the learning goals mechanism, and the alternative outperforms no learning at all. However, Cox and Ram (1999a) report specific cases in these experiments where learning algorithms negatively interact and thus nonintrospective learning actually performs worse than the no learning condition.

[Figure 4. Cumulative Explanation Performance as a Function of Metacognitive Reasoning. The plot graphs cumulative question points against story increment for the introspective learning, nonintrospective learning, and no learning conditions.]

Given that the XP application algorithm is involved in both, Meta-AQUA understands itself as it understands stories. That is, the AQUA system (Ram 1991, 1993), a microversion of which exists as the performance component within Meta-AQUA, explains explicitly represented stories using the XP application. Meta-AQUA explicitly represents traces of the explanation process and uses the algorithm to explain explanation failure and, thus, to understand itself through meta-explanation.

Awareness and PRODIGY

The PRODIGY5 planning and learning architecture (Carbonell, Knoblock, and Minton 1991; Veloso et al. 1995) implements a general problem-solving mechanism that builds a sequence of actions given a domain description, initial state, and goal conjunct. At its core is a nonlinear state-space planner called Prodigy4.0. It follows a means-ends analysis backward-chaining search procedure that reasons about both multiple goals and multiple alternative operators from its domain theory. PRODIGY uses a STRIPS-like operator representation language whose Backus-Naur form (BNF) is provided in Carbonell et al. (1992). PRODIGY is actually a set of programs, each of which explores a different facet of the planning and learning processes.

The core generative planner is currently Prodigy4.0. It uses the following four-step decision process to establish a sequence of actions that transforms an initial state of the world into the given goal state: (1) it selects a goal to solve from a list of candidate goals; (2) it selects an operator from a list of candidate operators that can achieve the selected goal; (3) it selects object bindings for open operator variables from a set of candidate bindings; and (4) if instantiated operators exist having no open preconditions and pending goals also exist, it chooses either to apply the instantiated operator (forward chain) or to continue subgoaling (backward chain).

A set of planning operators composes the representation of action in the domain. Table 2 shows an operator that enables PRODIGY to navigate forward in a grid environment where all movement is constrained along the directions north, south, east, and west. The operator contains four variables: an agent; a start location and a destination location; and a facing orientation for the agent. The operator preconditions specify that the agent is at the start location and that the start and destination are traversable. The facing and is-loc predicates constitute conditions that constrain search given a set of static domain axioms such as (is-loc Loc12 north Loc11) for all adjacent locations in the environment. An add list and a delete list represent the effects of the operator by changing the agent's location from the start to the destination. PRODIGY navigates from one location to another by recursively binding the goal location with the destination variable and subgoaling until the destination can be bound with an adjacent cell to the current location of the agent.

Prodigy/Analogy (Veloso 1994) implements a case-based planning mechanism (Cox, Muñoz-Avila, and Bergmann 2006) on top of the Prodigy4.0 generative planner. The main principle of case-based or analogical reasoning is to reuse past experience instead of performing a cognitive task from scratch each and every time a class of problems arises. This principle applies to both comprehension tasks such as was the case with explanation in Meta-


(OPERATOR FORWARD
(params <agent1> <location1> <location2>)
(preconds
((<agent1> AGENT)
(<location1> LOCATION)
(<location2> (and LOCATION
(diff <location1> <location2>)))
(<orient1> ORIENTATION))
(and
(traversable <location1>)
(traversable <location2>)
(at-agent <agent1> <location1>)
(facing <orient1>)
(is-loc <location2> <orient1> <location1>)))
(effects
()
((del (at-agent <agent1> <location1>))
(add (at-agent <agent1> <location2>)))))

Table 2. PRODIGY Operator for Forward Movement in a Grid Environment.
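The way an instantiated operator such as FORWARD in table 2 is applied can be sketched with set operations over literal tuples: check that the preconditions hold in the state, then remove the delete list and union in the add list. The flat-tuple state encoding below is an assumption for illustration, not PRODIGY's internal representation.

```python
# Sketch of STRIPS-style operator application (illustrative encoding).

def applicable(state, preconds):
    """An operator is applicable when all its preconditions hold."""
    return preconds <= state

def apply_operator(state, preconds, dels, adds):
    """Apply delete and add lists to produce the successor state."""
    if not applicable(state, preconds):
        raise ValueError("preconditions not satisfied")
    return (state - dels) | adds

# Instantiated FORWARD: agent1 moves from loc11 to loc12 while facing north.
state = {
    ("at-agent", "agent1", "loc11"),
    ("facing", "north"),
    ("traversable", "loc11"),
    ("traversable", "loc12"),
    ("is-loc", "loc12", "north", "loc11"),   # static domain axiom
}
preconds = {
    ("at-agent", "agent1", "loc11"),
    ("facing", "north"),
    ("traversable", "loc11"),
    ("traversable", "loc12"),
    ("is-loc", "loc12", "north", "loc11"),
}
dels = {("at-agent", "agent1", "loc11")}
adds = {("at-agent", "agent1", "loc12")}

new_state = apply_operator(state, preconds, dels, adds)
```

After application the agent's location literal has moved from loc11 to loc12, exactly the effect the delete and add lists in table 2 encode.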

AQUA and to problem-solving tasks such as planning. The task in case-based planning is to find an old solution to a past problem that is similar to the current problem and to adapt the old plan so that it transforms the current initial state into the current goal state (Muñoz-Avila and Cox 2006). In Prodigy/Analogy the case-based approach takes an especially appropriate direction relative to self-aware systems.

Carbonell (1986) argues that two types of analogical mappings exist when using past experience in planning. Transformation analogy directly changes an old plan to fit the new situation so as to achieve the current goals. Alternatively, derivational analogy uses a representation of the prior reasoning that produced the old plan to reason likewise in the new context. Prodigy/Analogy implements derivational analogy by annotating plan solutions with the plan justifications at case storage time and, given a new goal conjunct, replaying the derivation to reconstruct the old line of reasoning at planning time (Veloso and Carbonell 1994). Figure 5 illustrates the replay process in the Wumpus World domain from the PRODIGY User Interface 2.0 (Cox and Veloso 1997). Veloso (1994) has shown significant improvement in performance over generative planning using such plan rationale.

[Figure 5. Prodigy/Analogy Replays a Past Derivational Trace of Planning on a New Problem.]

The justification structures saved by Prodigy/Analogy begin to provide a principled mechanism that supports an agent's awareness of what it is doing and why. In particular, the representation provides the goal-subgoal structure relevant to each action in the final plan. But beyond recording goal linkage to action selection, Prodigy/Analogy records the decision basis and alternatives for each of the four decision points in its planning process. Like Meta-AQUA, Prodigy/Analogy can reason about its own reasoning to improve its performance. One difference is that Prodigy/Analogy capitalizes upon success to learn, whereas Meta-AQUA depends upon failure for determining those portions of its knowledge that require improvement. Yet both systems require an outside source to provide the goals that drive the performance task. In both cases the programs halt once the achievement or comprehension goal is solved.

INTRO

This section describes a preliminary implementation of an initial introspective cognitive agent called INTRO6 that is designed to exist continually and independently in a given environment. INTRO can generate explicit declarative goals that provide intention and a focus for activities (see Ram and Leake [1995] for a functional motivation for the role of explicit goals). This work is very much in the spirit of some recent research on persistent cognitive assistants (for example, Haigh, Kiff, and Ho [2006]; Myers and Yorke-Smith [2005]), but it examines issues outside the mixed-initiative relationship with a human user.

The agent itself has four major components (see figure 6). INTRO has primitive perceptual and effector subsystems and two cognitive components. The cognitive planning and understanding subsystems consist of the Prodigy/Agent and Meta-AQUA systems, respectively. The base cognitive cycle is to observe the world, form a goal to change the world, create a plan to achieve the goal, and finally act in accordance with the plan, observing the results in turn.

The Wumpus World

INTRO operates in a very simple environment described by Russell and Norvig (2003). The Wumpus World7 contains an agent whose goal is to find a pot of gold while avoiding pits and the Wumpus creature. Unlike the Wumpus, the INTRO agent can perform actions to change the environment including turn, move ahead, pick up, and shoot an arrow. Unlike a classical planning domain, the environment is not fully observable; rather, the agent perceives it through percepts. The percepts consist of the 5-tuple [stench, breeze, glitter, bump, scream] that represents whether the Wumpus is nearby, a pit is nearby, gold is colocated with the agent, an obstacle is encountered, and the Wumpus is making a sound.

For the purposes of this example, the environment has been limited to a four by two cell world (see figure 7). The agent, depicted as the character 1, always starts in cell [1,1] facing east at the southwesternmost end of the grid. The Wumpus (W), pits (O), and the gold ($) can be placed in any of the remaining cells. In a major change to the interpretation of the game, the Wumpus is rather benign and will not injure the agent. It screams when it is hungry and an agent is adjacent, as well as when it is shot with an arrow. The agent can choose either to feed the Wumpus or to shoot it with the arrow. Either will prevent continuous screaming.

The original Wumpus World agent maps a given input tuple to an output action choice by following an interpretation program and a memory of previous percepts and actions. The agent control code was modified to accept instead an action taken from plans output by the Prodigy/Agent component of INTRO. The simulator then simply presents a visualization of the events as the actions execute. As the implementation currently stands, the output of the simulator is not used. Ideally (as shown in the dashed arrow of figure 6) the 5-tuple percept output should be input into the perceptual component. The one exception is that the simulator was changed so that when the agent enters the same grid cell as the Wumpus, the sound state of the Wumpus will change to scream. Code was added so that this action is reported to Meta-AQUA as it occurs.

The Perceptual Subsystem

The perception subsystem [sic] does not model a realistic perceptual filtering of the world and its events. Instead the module in its present form acts as a translator between the representations of Prodigy/Agent and Meta-AQUA. It serves more as a means for input than it does for perception.

The main problem of translation is one of mapping a flat STRIPS operator to an arbitrarily deep, hierarchical, slot-filler event. For example the action FORWARD(agent1,loc11,loc12) must translate into an appropriate hierarchical frame representation. Problems exist


when the parameters do not match in terms of both names and content, when they differ in relative order, or when extraneous parameters exist. For example the Meta-AQUA frame definition in table 3 does not directly correspond to the PRODIGY operator in table 2. To resolve such problems, a mapping function exists for translation.

[Figure 6. INTRO Architecture. The Wumpus Environment Simulator connects through the effector and perceptual subsystems to Prodigy/Agent and Meta-AQUA, which exchange plans and goals.]

The Meta-AQUA Comprehension Component

Understanding the actions of a story and the reasons why characters perform such actions is very similar to comprehending the actions and motivations of agents in any environment. So within the INTRO cognitive agent, Meta-AQUA performs this task to understand the results of its own actions using the approach described in the self-understanding and Meta-AQUA section. In both cases Meta-AQUA inputs the events in a conceptual representation and builds an internal model to reflect the causal connections between them. In both cases an anomaly or otherwise interesting event causes Meta-AQUA to generate an explanation of the event. However, instead of using the explanation to modify its knowledge, INTRO uses the explanation to generate a goal to modify the environment. Stated differently, its central role is that of problem recognition. That is, how can a persistent agent recognize when a novel problem exists in the world and form a new goal to resolve it?

As is shown in the example output of figure 8, Meta-AQUA processes three forward movements that are not interesting in any significant way. These actions (displayed in the lower right Prodigy/Agent plan window) are therefore skimmed and simply inserted into the model of the Wumpus World actions. However, when the system encounters the scream action, it processes it differently, because sex, violence, and loud noises are inherently interesting (Schank 1979).8 Notice that Meta-AQUA presents two large shaded windows that display internal representations of the cognitive processing. In the "Goal Monitor" window (far left of figure), the system shows a goal (that is, ID.3206) to identify interesting events in the input. The scream input causes Meta-AQUA to spawn a new goal (that is, GENERATE.3264) to generate an explanation for the scream. This goal is a knowledge goal or question whose answer explains why the Wumpus performed the action (Ram 1991).9 The "Memory Monitor" window (upper right) displays a representation of the memory retrieval and storage activities along with the indexes and mental objects that occupy memory.

Figure 7. Initial Wumpus World State. A. Pictorial representation. B. ASCII representation:

  |-----|-----|-----|-----|
2 |     |     |  O  | W$  |
  |-----|-----|-----|-----|
1 |  1  |     |     |     |
  |-----|-----|-----|-----|
     1     2     3     4


(define-frame FORWARD
(isa (value (ptrans)))
(actor (value (agent)))
(object (value =actor))
(from (value (at-location
(domain (value =actor))
(co-domain (value (location))))))
(to (value (at-location
(domain (value =actor))
(co-domain (value (location)))))))

Table 3. Frame Definition for Forward Movement.
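The translation problem described above, mapping the flat action FORWARD(agent1, loc11, loc12) of table 2 into the nested frame of table 3, can be sketched as follows. The dictionary encoding of frames and the helper name `translate_forward` are illustrative assumptions; the actual mapping function operates over Meta-AQUA's frame system.

```python
# Sketch of mapping a flat STRIPS action to a hierarchical slot-filler frame.

def translate_forward(args):
    """Map FORWARD(agent, source, destination) to a ptrans-style frame."""
    agent, src, dst = args
    actor = {"isa": "agent", "name": agent}
    return {
        "isa": "ptrans",          # FORWARD is a kind of physical transfer
        "actor": actor,
        "object": actor,          # the agent moves itself (object = actor)
        "from": {"isa": "at-location", "domain": actor, "co-domain": src},
        "to":   {"isa": "at-location", "domain": actor, "co-domain": dst},
    }

frame = translate_forward(("agent1", "loc11", "loc12"))
```

Note how the single flat parameter list fans out into nested at-location structures, which is exactly the name, order, and depth mismatch the text identifies.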

As a result of this activity, the Meta-AQUA component attempts to explain the scream. The background memory contains a conceptual network, a case/script library, a set of causal explanation patterns (XPs) and meta-explanation patterns (Meta-XPs), and a suite of learning algorithms. Meta-AQUA then retrieves a simple XP of the form α→β such that the antecedent (alpha) is hunger and the consequent (beta) is screaming. That is, the Wumpus screams, because it is hungry (XP-WUMPUS-SCREAMS.3432).

Now the system attempts to resolve the situation. Usually Meta-AQUA will seek a change of its knowledge to compensate for an apparent anomaly in a situation. The assumption is that the observer is passive. INTRO entertains instead that a goal can be spawned to resolve the anomaly by planning and executing actions that remove the antecedent of the XP. Once the antecedent is gone, the screaming will cease. Thus the resulting goal is to remove the hunger, and the goal is passed to the Prodigy/Agent component.

The algorithm is defined more formally in table 4. Cox and Ram (1999a) provide technical details related to steps 1 and 2. Step 3 determines the substitution set with which to instantiate the explanation of the event. Step 4 checks the instantiated preconditions of the explanation. If the preconditions hold, the negation of the explanation's antecedent is posted as a goal.

Although this example is extremely simple and rather contrived, more realistic and detailed examples exist. In general, an XP contains a set of antecedents (rather than the single antecedent α) called the XP asserted nodes (Ram 1993). Each of them must be true for the explains node (the event being explained) to hold. If any are removed by achieving their negation, then the causal structure will no longer hold. So in principle our simple Wumpus example can generalize to more complex behavior. Whether it scales well is another (future) issue. However, using this mechanism, Meta-AQUA has successfully processed thousands of short randomly generated stories (Cox 1996a) similar to the hundred stories from figure 4.

The Prodigy/Agent Planning Component

Prodigy/Agent10 (Cox et al. 2001; Elahi and Cox 2003) is an independent state-space planning agent that uses a predefined communication protocol represented in KQML to accept planning requests and to return a sequence of actions that achieve the planning goals. It is built around the PRODIGY planning and learning architecture described in the "Awareness and PRODIGY" section.

Planners are traditionally given specific goals to achieve by generating a sequence of actions that alters the physical environment. Yet a perpetual agent should be able to generate its own goals. We as humans have expectations about the world, how it should behave, and how we like it. When we detect something anomalous that violates these expectations, we attempt to explain the situation to make sense of the situation. Given a satisfactory explanation, the


Figure 8. INTRO Output and Interface.
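The goal-generation steps described in the preceding section (instantiate the retrieved XP, check its preconditions against the world, and post the negation of its antecedent as a planning goal) can be sketched as follows. The XP encoding, predicate strings, and function name are assumptions for illustration, not the table 4 algorithm itself.

```python
# Sketch of problem recognition: from an instantiated XP to a planning goal.

WORLD = {"hungry(wumpus1)", "adjacent(agent1, wumpus1)"}

def goal_from_xp(xp, bindings):
    """If the XP's instantiated preconditions hold, return the negated
    antecedent as a goal; removing the antecedent removes the consequent."""
    if all(p.format(**bindings) in WORLD for p in xp["preconds"]):
        return ("not", xp["antecedent"].format(**bindings))
    return None

# Hypothetical rendering of XP-WUMPUS-SCREAMS: hunger (alpha) explains
# screaming (beta).
xp_wumpus_screams = {
    "antecedent": "hungry({w})",
    "consequent": "screams({w})",
    "preconds": ["hungry({w})", "adjacent({a}, {w})"],
}

goal = goal_from_xp(xp_wumpus_screams, {"w": "wumpus1", "a": "agent1"})
# The posted goal is to remove the hunger, so the screaming will cease.
```

Achieving the goal (for example, by feeding the Wumpus) falsifies the antecedent, so the anomalous consequent no longer follows from the causal structure.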

anomaly will be resolved. Hence we have an opportunity to learn something new about the world. Similar situations should no longer appear to be anomalous for us in the future. Given an unsatisfactory explanation, however, we may determine that something needs to be done to make the situation more to our liking. The result is a self-determined planning goal.

So given the goal to remove the hunger state and the initial state of the world by the Meta-AQUA module, Prodigy/Agent generates a plan containing the action of feeding the Wumpus. When this plan is executed, no anomalous event occurs, because the reason for the unexpected behavior is no longer present in the environment. That is, in response to an active environment, INTRO generates its own goals to change the world.

Now to be clear, INTRO begins this task by providing Prodigy/Agent the standard goal to possess the gold, and this goal is given without appeal to experience. PRODIGY then creates a plan to enter the hex with the gold, look for it, and grab it (thereby possessing it). However when the agent enters the cell with the gold, the Wumpus is there also. Because the agent is in the same cell and the Wumpus is hungry, it screams. This event then triggers the explanation, which leads to the new goal of not having the Wumpus hungry and to feeding the Wumpus and stopping the screaming. The INTRO program then halts. Nevertheless this sequence outlines a mechanism that relaxes the dependency upon user-supplied goals; it starts to provide a locus of inquiry regarding the origin of independent planning goals and offers up a knowledge-level alternative to the blind maximization of value functions and the programmer's burden of supplying all possible background goals.

The agent enters the location and can simply grab the gold to achieve its given goal. The Wumpus is benign and does not pose an actual threat to the goal, but something is missing in this interpretation. The screaming demands attention and a response. But why? Why do we stop to help a stranger in need or stoop to pick up a piece of garbage along the curb when we recognize such problems in an otherwise unrelated context? Our implementation starts to answer these questions, but just barely. A rational person would not decide to stop loud


1. Detect interesting event, β′
2. Retrieve an explanation, E: α → β, that covers the anomaly
3. σ ← Unify(β, β′)
4. Verify Subst(σ, precond(E))
5. Post goal, G = Achieve(¬Subst(σ, α))

Table 4. Explanation-Based Goal Formulation.
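The procedure of table 4 can be made concrete with a small executable sketch. This is illustrative only, not Meta-AQUA's actual frame-based Lisp implementation; the tuple representation of predicates and the helper names are assumptions made for the example.

```python
# Minimal sketch of the Table 4 algorithm (explanation-based goal
# formulation). Predicates are tuples such as ("screams", "?w"); an
# explanation (XP) pairs an antecedent with the consequent it covers.
# The tuple encoding is an assumption for illustration only.

def unify(pattern, instance):
    """Step 3: compute sigma by unifying the XP's consequent (beta)
    with the observed event (beta-prime); variables start with '?'."""
    sigma = {}
    if len(pattern) != len(instance):
        return None
    for p, i in zip(pattern, instance):
        if isinstance(p, str) and p.startswith("?"):
            if sigma.get(p, i) != i:
                return None
            sigma[p] = i
        elif p != i:
            return None
    return sigma

def subst(sigma, term):
    """Apply the substitution sigma to a predicate tuple."""
    return tuple(sigma.get(t, t) for t in term)

def formulate_goal(event, xp, state):
    """Steps 3-5: unify, verify the instantiated preconditions,
    and post the negated antecedent as an achievement goal."""
    antecedent, consequent, preconds = xp
    sigma = unify(consequent, event)                         # step 3
    if sigma is None:
        return None
    if not all(subst(sigma, p) in state for p in preconds):  # step 4
        return None
    return ("achieve", ("not",) + subst(sigma, antecedent))  # step 5

# Wumpus example: hunger explains screaming, so the posted goal is
# that the Wumpus not be hungry.
xp = (("hungry", "?w"), ("screams", "?w"), [("hungry", "?w")])
state = {("hungry", "wumpus"), ("at", "agent", "gold-cell")}
print(formulate_goal(("screams", "wumpus"), xp, state))
# -> ('achieve', ('not', 'hungry', 'wumpus'))
```

Steps 1 and 2 (detecting the interesting event and retrieving an XP that covers it) are taken as given here; in Meta-AQUA they come from the interestingness criteria and case retrieval discussed earlier.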

traffic noises by putting sugar in gas tanks, given the explanation that car engines make noise only when having clean fuel. This is the case despite the fact that if sugar is in all of the gas tanks, the fuel is not clean, and the noise will cease. Such a strategy is impractical and has undesirable consequences. Yet one might call the police on a single loud muffler. But note also that the choice of notifying the police has nothing to do with the direct mechanical explanation of why the car is making the noise. The police choice follows from an explanation of social convention, volitional decision making, legal responsibilities, and agency.

In the limit, we wish for an agent that can tinker. It must be able to push on and explore the boundaries of its own knowledge. An agent needs to discover some of the interactions between actions and animate objects, the causal connections between agents in a social network, and the relationship between the physical and mental environments in which it exists to be able to effectively explain the observations it encounters and to create new goals or to change existing goals11 as a result.

Between Learning Goals and Achievement Goals

One of the most significant outstanding issues involved with the research presented here relates to distinguishing the requirements associated with learning goals and with achievement goals. When a system understands that its knowledge is flawed, it needs to generate a goal to change its own knowledge so that it is less likely to repeat the reasoning error that uncovered the flaw. When the world is "flawed," a system needs to generate a goal to achieve an alternative state of the world. The main issue is detecting the conditions under which a system does the latter versus the former. How does INTRO know that its knowledge of screaming is not the problem that needs to be resolved in the Wumpus World scenario?

Currently the system is simply hard-coded to automatically generate achievement goals and then to plan for them. Future research remains to implement a sufficient decision process to differentiate the two goal types. But consider that clues will exist in the context provided by the series of events experienced and the introspective trace of the reasoning that accompanies action and deliberation in the environment. Besides, it is stretching our imagination to entertain that somehow modifying our definition of a screaming event will result in the Wumpus not screaming in the future. Wishing that the Wumpus not be loud does not make it so.

Moorman (1997; Moorman and Ram 1999) presents a theory of reading for science fiction and similar stories that require the willing suspension of (dis)belief. The theory presents a matrix of conceptual dimensions along which a knowledge structure can be moved to analogically map between inputs. Thus to view a robot as human is a smaller shift in the matrix than is to view a robot somehow as a space-time event. Understanding science fiction requires such conceptual shifts, even though the reasoner may not believe that a robot is truly human (a living organism).

When understanding events and objects in any environment (fictional or not), judgments as to the reasonableness of possibilities do exist in the natural world. Thus it is rational to consider feeding the Wumpus, because an action actually exists to achieve the goal, whereas the alternative is too strange. A system might also make the decision to generate an achievement goal over a learning goal based upon its experience with general screaming events and the relative certainty of such knowledge. Note that to do so, a system must evaluate its own knowledge, experience, and capability; it must use or create metaknowledge.
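The decision between an achievement goal and a learning goal is left open above; the sketch below is one hypothetical way to ground it in metaknowledge. The certainty threshold, the inputs, and their names are assumptions for illustration, not part of the INTRO implementation (which, as noted, hard-codes the achievement-goal branch).

```python
# Hypothetical sketch of choosing a goal type from metaknowledge.
# The threshold value and both inputs are assumptions; INTRO itself
# always generates achievement goals.

def choose_goal_type(concept_certainty, has_action_to_fix_world):
    """Return which kind of goal to generate for an anomaly.

    concept_certainty: metaknowledge about how reliable the agent's
        definition of the anomalous concept is (0.0 - 1.0).
    has_action_to_fix_world: whether some plan could remove the
        anomaly's antecedent in the environment.
    """
    CERTAINTY_THRESHOLD = 0.8  # assumed value
    if concept_certainty < CERTAINTY_THRESHOLD:
        # The knowledge itself is suspect: revise it, don't act on it.
        return "learning-goal"
    if has_action_to_fix_world:
        # Knowledge is trusted and the world can be changed.
        return "achievement-goal"
    # Trusted knowledge but no applicable action: fall back to learning.
    return "learning-goal"

# The Wumpus case: screaming is well understood and feeding is possible,
# so the agent changes the world rather than its concept of screaming.
print(choose_goal_type(0.9, True))   # achievement-goal
print(choose_goal_type(0.3, True))   # learning-goal
```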


Problems exist with these speculations, however. For example, basing a decision upon the fact that the system can execute a plan to feed the Wumpus requires that the system reason about the structure of a plan before the plan is computed. Similarly, to reason about the potential learning goal (to change the concept of screaming) requires the system to consider steps in a learning plan before the system can perform the planning. In either case the solution is not to be found solely within the traditional automated planning community nor within the knowledge representation community. Rather the capability to determine the category of goal worth pursuing is tightly coupled with metacognitive activity.

Self-Awareness

A renewed interest exists in machines that have a metacognitive or introspective capacity, implement metareasoning, incorporate metaknowledge, or are otherwise in some way self-aware (see Cox [2005] for a thorough review). Yet little consensus exists in the AI community as to the meaning and use of such mental terms, especially that of self-awareness. But before considering what it might mean for a machine to be self-aware, consider what it means to be aware at all. A weak sense of the word does exist. For example your supervisor may tell you that "I am aware of your problem." Here awareness appears to be less empathetic than in the statement "I understand your problem." In the first sense awareness is simply a registration of some state or event, ignoring for the moment consciousness. Awareness need not be magical.

Consider then what it means to be aware in the sense of understanding the world. To understand it is not simply to classify objects in the environment into disjunct categories. Rather it is to interpret the world with respect to the knowledge and experience one (human or machine) currently has in memory. The case-based reasoning community suggests that it is to find a piece of knowledge, schema, or case most relevant to its conceptual meaning and to apply it to the current situation so that a new structure can be built that provides causal linkages between what has already occurred and what is likely to occur next; that is, it provides causal explanation and expectation (Kolodner 1993; Leake 1992; Ram 1993; Schank 1986; Schank, Kass, and Riesbeck 1994). Understanding or awareness is not just perceiving the environment. It is certainly not logical interpretation as a mapping from system symbols to corresponding objects, relations, and functions in the environment. Here I claim that acute awareness of the world implies being able to comprehend when the world is in need of change and, as a result, being able to form an independent goal to change it.

Likewise, being self-aware is not just perceiving the self in the environment, nor is it simply possessing information about the self; rather it is self-interpretation (see also Rosenthal [2000]). It is understanding the self well enough to generate a set of explicit learning goals that act as a target for improving the knowledge used to make decisions in the world. It is also understanding the self well enough to explain ourselves to others (for example, see Johnson [1994]; Core et al. [2006]; van Lent, Fisher, and Mancuso [2004]). To equate self-awareness with conscious direct experience misses the point. Many nonconscious correlates such as implicit memory are highly associated with self-awareness and metacognition (Reder and Schunn 1996). Some (for example, at the DARPA Workshop on Self-Aware Computer Systems [McCarthy and Chaudri 2004]) have suggested that self-aware systems are linked in a special way with metacognition. But if one follows a straightforward definition of metacognition as cognition about cognition, then representing a trace of reasoning and then reasoning about the trace is sufficient. Prodigy/Analogy does represent the rationale for its planning decisions and can reason about the rationale when applying further planning. Yet the program has no reference to itself other than the annotations of justifications on its search tree nodes. Meta-AQUA represents goals as relations between a volitional agent (itself) and the state it desires. Yet it never relies upon the symbol for itself in any of its processing at the base level or metalevel. However, to reason about the self without an explicit structural representation of the self seems less than satisfactory.

Thus, as currently implemented, INTRO is just that, an introduction. What is required is a more thorough integration of the metacognitive knowledge and processes in a system like INTRO. This article and the INTRO implementation have concentrated on a cognitive integration of planning, comprehension, and learning; a metacognitive integration between Meta-AQUA and PRODIGY remains unfinished.

Acknowledgments

I thank Sylvia George for the help in implementing the translation code and for the modifications to the Wumpus World code. She also


wrote much of the PRODIGY domain specification for the Wumpus World problem and contributed some of the INTRO-specific frame representations to Meta-AQUA so that it could process and explain the scream event. Finally I thank Mark Burstein, David McDonald, Kerry Moffitt, John Ostwald, and the anonymous reviewers for their helpful comments.

Notes

1. But see also Singh (2005) and Minsky, Singh, and Sloman (2004) for a complementary approach. Brachman (2002) has also called for a greater emphasis upon the integration of cognitive and metacognitive capabilities.

2. Cox (1996b, pp. 294–299) discusses some of the intersections between problem solving and comprehension represented in the letter C region in the top center of the figure. For example, a problem solver must be able to monitor the execution of a solution to confirm that it achieves its goal. If the comprehension process determines that the goal pursuit is not proceeding as desired, then the execution failure must be addressed and the solution changed.

3. The Meta-AQUA home page is at meta-aqua.mcox.org. Version 6 of the code release, related publications, and support documents exist there.

4. The University of Maryland Nonlin home page exists at www.cs.umd.edu/projects/plus/Nonlin/.

5. The PRODIGY home page exists at www.cs.cmu.edu/afs/cs.cmu.edu/project/prodigy/Web/prodigy-home.html.

6. The INTRO home page exists at meta-aqua.mcox.org/intro.html.

7. The code we modified is located at aima.cs.berkeley.edu/code.html. See the acknowledgments.

8. The system also finds all anomalies that diverge from expectations to be interesting, as well as concepts about which it has recently learned something. In any case the situation is surprising, and this constitutes sufficient grounds for being interesting and in need of explanation.

9. In the main INTRO window, the goal is shown as the frame ACTOR.3068. The actor frame represents the relation between the action and the agent who did the action. That is, it is the relation facet of the actor slot whose value facet is the Wumpus. The explanation (that is, the answer to the question) is a representation of why the Wumpus "decided" to perform the scream event.

10. See www.mcox.org/Prodigy-Agent/ for a public version of the implemented system, the user manual, and further details.

11. See Cox and Veloso (1998) and Cox and Zhang (2005) for a theory of goal change in planning.

References

Brachman, R. J. 2002. Systems That Know What They Are Doing. IEEE Intelligent Systems 17(6) (November–December): 67–71.

Carbonell, J. G. 1986. Derivational Analogy: A Theory of Reconstructive Problem Solving and Expertise Acquisition. In Machine Learning: An Artificial Intelligence Approach, Volume 2, ed. R. Michalski, J. Carbonell, and T. M. Mitchell, 371–392. San Francisco: Morgan Kaufmann Publishers.

Carbonell, J. G.; Blythe, J.; Etzioni, O.; Gil, Y.; Joseph, R.; Kahn, D.; Knoblock, C.; Minton, S.; Perez, A.; Reilly, S.; Veloso, M.; and Wang, X. 1992. PRODIGY 4.0: The Manual and Tutorial. Technical Report CMU-CS-92-150, Computer Science Dept., Carnegie Mellon University, Pittsburgh, PA.

Carbonell, J. G.; Knoblock, C. A.; and Minton, S. 1991. PRODIGY: An Integrated Architecture for Planning and Learning. In Architectures for Intelligence: The 22nd Carnegie Mellon Symposium on Cognition, ed. K. Van Lehn, 241–278. Hillsdale, NJ: Lawrence Erlbaum Associates.

Chien, S.; Hill, R., Jr.; Wang, X.; Estlin, T.; Fayyad, K.; and Mortenson, H. 1996. Why Real-World Planning Is Difficult: A Tale of Two Applications. In New Directions in AI Planning, ed. M. Ghallab and A. Milani, 287–298. Amsterdam: IOS Press.

Core, M. G.; Lane, H. C.; van Lent, M.; Gomboc, D.; Solomon, S.; and Rosenberg, M. 2006. Building Explainable Artificial Intelligence Systems. In Proceedings of the Eighteenth Conference on Innovative Applications of Artificial Intelligence. Menlo Park, CA: AAAI Press.

Cox, M. T. 2005. Metacognition in Computation: A Selected Research Review. Artificial Intelligence 169(2): 104–141.

Cox, M. T. 1997. An Explicit Representation of Reasoning Failures. In Case-Based Reasoning Research and Development: Second International Conference on Case-Based Reasoning, ed. D. B. Leake and E. Plaza, 211–222. Berlin: Springer-Verlag.

Cox, M. T. 1996a. An Empirical Study of Computational Introspection: Evaluating Introspective Multistrategy Learning in the Meta-AQUA System. In Proceedings of the Third International Workshop on Multistrategy Learning, ed. R. S. Michalski and J. Wnek, 135–146. Menlo Park, CA: AAAI Press/The MIT Press.

Cox, M. T. 1996b. Introspective Multistrategy Learning: Constructing a Learning Strategy under Reasoning Failure. Technical Report GIT-CC-96-06, Ph.D. dissertation, College of Computing, Georgia Institute of Technology, Atlanta, GA (hcs.bbn.com/personnel/Cox/thesis/).

Cox, M. T. 1994a. Case-Based Introspection. In Proceedings of the Twelfth National Conference on Artificial Intelligence, 1435. Menlo Park, CA: AAAI Press.

Cox, M. T. 1994b. Machines That Forget: Learning from Retrieval Failure of Misindexed Explanations. In Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society, 225–230. Hillsdale, NJ: Lawrence Erlbaum Associates.

Cox, M. T.; Edwin, G.; Balasubramanian, K.; and Elahi, M. 2001. Multiagent Goal Transformation and Mixed-Initiative Planning Using Prodigy/Agent. In Proceedings of the 5th World Multiconference on Systemics, Cybernetics and Informatics, Vol. VII, ed. N. Callaos, B. Sanchez, L. H. Encinas, and J. G. Busse, 1–6. Orlando, FL: International Institute of Informatics and Systemics.

Cox, M. T.; Muñoz-Avila, H.; and Bergmann, R. 2006. Case-Based Planning. Knowledge Engineering Review 20(3): 283–287.

Cox, M. T., and Ram, A. 1999a. Introspective Multistrategy Learning: On the Construction of Learning Strategies. Artificial Intelligence 112(1–2): 1–55.

Cox, M. T., and Ram, A. 1999b. On the Intersection of Story Understanding and Learning. In Understanding Language Understanding: Computational Models of Reading, ed. A. Ram and K. Moorman, 397–434. Cambridge, MA: The MIT Press/Bradford Books.

Cox, M. T., and Ram, A. 1995. Interacting Learning-Goals: Treating Learning as a Planning Task. In Advances in Case-Based Reasoning, ed. J.-P. Haton, M. Keane, and M. Manago, 60–74. Berlin: Springer.

Cox, M. T., and Veloso, M. M. 1998. Goal Transformations in Continuous Planning. Paper presented at the 1998 AAAI Fall Symposium on Distributed Continual Planning, Orlando, FL.

Cox, M. T., and Veloso, M. M. 1997. Supporting Combined Human and Machine Planning: An Interface for Planning by Analogical Reasoning. In Case-Based Reasoning Research and Development: Second International Conference on Case-Based Reasoning, ed. D. B. Leake and E. Plaza, 531–540. Berlin: Springer-Verlag.

Cox, M. T., and Zhang, C. 2005. Planning as Mixed-Initiative Goal Manipulation. In Proceedings of the Fifteenth International Conference on Automated Planning and Scheduling, ed. S. Biundo, K. Myers, and K. Rajan, 282–291. Menlo Park, CA: AAAI Press.

Elahi, M., and Cox, M. T. 2003. User's Manual for Prodigy/Agent, Ver. 1.0. Technical Report WSU-CS-03-02, Department of Computer Science and Engineering, Wright State University, Dayton, OH.

Fox, S. 1996. Introspective Learning for Case-Based Planning. Technical Report 462, Ph.D. dissertation, Computer Science Dept., Indiana University, Bloomington, IN.

Fox, S., and Leake, D. B. 1995. Learning to Refine Indexing by Introspective Reasoning. In Proceedings of the First International Conference on Case-Based Reasoning, Lecture Notes in Computer Science 1010, 430–440. Berlin: Springer-Verlag.

Ghosh, S.; Hendler, J.; Kambhampati, S.; and Kettler, B. 1992. The UM Nonlin Planning System [a Common Lisp Implementation of A. Tate's Nonlin Planner]. College Park, MD: University of Maryland (ftp://cs.umd.edu/pub/nonlin/nonlin-files.tar.Z).

Haigh, K. Z.; Kiff, L. M.; and Ho, G. 2006. The Independent LifeStyle Assistant (I.L.S.A.): Lessons Learned. Assistive Technology 18: 87–106.

Johnson, W. L. 1994. Agents That Learn to Explain Themselves. In Proceedings of the Twelfth National Conference on Artificial Intelligence. Menlo Park, CA: AAAI Press.

Kolodner, J. L. 1993. Case-Based Reasoning. San Mateo, CA: Morgan Kaufmann Publishers.

Leake, D. B. 1992. Evaluating Explanations: A Content Theory. Hillsdale, NJ: Lawrence Erlbaum Associates.

Lee, P., and Cox, M. T. 2002. Dimensional Indexing for Targeted Case-Base Retrieval: The SMIRKS System. In Proceedings of the 15th International Florida Artificial Intelligence Research Society Conference, ed. S. Haller and G. Simmons, 62–66. Menlo Park, CA: AAAI Press.

McCarthy, J., and Chaudri, V. 2004. DARPA Workshop on Self-Aware Computer Systems, 27–28 April. Arlington, VA: SRI Headquarters.

Minsky, M.; Singh, P.; and Sloman, A. 2004. The St. Thomas Common Sense Symposium: Designing Architectures for Human-Level Intelligence. AI Magazine 25(2): 113–124.

Moorman, K. 1997. A Functional Theory of Creative Reading: Process, Knowledge, and Evaluation. Ph.D. dissertation, College of Computing, Georgia Institute of Technology, Atlanta, GA.

Moorman, K., and Ram, A. 1999. Creativity in Reading: Understanding Novel Concepts. In Understanding Language Understanding: Computational Models of Reading, ed. A. Ram and K. Moorman, 359–395. Cambridge, MA: The MIT Press/Bradford Books.

Muñoz-Avila, H., and Cox, M. T. 2006. Case-Based Plan Adaptation: An Analysis and Review. Unpublished (www.cse.lehigh.edu/~munoz/Publications/MunozCox06.pdf).

Murdock, J. W. 2001. Self-Improvement through Self-Understanding: Model-Based Reflection for Agent Adaptation. Ph.D. dissertation, College of Computing, Georgia Institute of Technology, Atlanta, GA.

Murdock, J. W., and Goel, A. K. 2001. Meta-Case-Based Reasoning: Using Functional Models to Adapt Case-Based Agents. In Case-Based Reasoning Research and Development: Proceedings of the 4th International Conference on Case-Based Reasoning, ICCBR-2001, ed. D. W. Aha, I. Watson, and Q. Yang, 407–421. Berlin: Springer.

Myers, K., and Yorke-Smith, N. 2005. A Cognitive Framework for Delegation to an Assistive User Agent. In Mixed-Initiative Problem-Solving Assistants: Papers from the 2005 AAAI Fall Symposium, AAAI Technical Report FS-05-07. Menlo Park, CA: American Association for Artificial Intelligence.

Pollack, M. E., and Horty, J. F. 1999. There's More to Life than Making Plans. AI Magazine 20(4): 71–83.

Ram, A. 1993. Indexing, Elaboration and Refinement: Incremental Learning of Explanatory Cases. Machine Learning 10: 201–248.

Ram, A. 1991. A Theory of Questions and Question Asking. The Journal of the Learning Sciences 1(3–4): 273–318.

Ram, A., and Leake, D., eds. 1995. Goal-Driven Learning. Cambridge, MA: MIT Press/Bradford Books.

Reder, L. M., and Schunn, C. D. 1996. Metacognition Does Not Imply Awareness: Strategy Choice Is Governed by Implicit Learning and Memory. In Implicit Memory and Metacognition, ed. L. Reder, 45–77. Mahwah, NJ: Lawrence Erlbaum Associates.

Rosenthal, D. 2000. Introspection and Self-Interpretation. Philosophical Topics (special issue on introspection) 28(2): 201–233.

Russell, S., and Norvig, P. 2003. Artificial Intelligence: A Modern Approach, 2nd ed. Upper Saddle River, NJ: Prentice Hall.

Schank, R. C. 1979. Interestingness: Controlling Inferences. Artificial Intelligence 12(3): 273–297.

Schank, R. C. 1982. Dynamic Memory: A Theory of Reminding and Learning in Computers and People. Cambridge: Cambridge University Press.

Schank, R. C. 1986. Explanation Patterns: Understanding Mechanically and Creatively. Hillsdale, NJ: Lawrence Erlbaum Associates.

Schank, R. C.; Kass, A.; and Riesbeck, C. K. 1994. Inside Case-Based Explanation. Hillsdale, NJ: Lawrence Erlbaum Associates.

Singh, P. 2005. EM-ONE: An Architecture for Reflective Commonsense Thinking. Ph.D. dissertation, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA.

Tate, A. 1976. Project Planning Using a Hierarchic Non-linear Planner. Technical Report No. 25, Department of Artificial Intelligence, University of Edinburgh, Edinburgh, UK.

van Lent, M.; Fisher, W.; and Mancuso, M. 2004. An Explainable Artificial Intelligence System for Small-Unit Tactical Behavior. In Proceedings of the Nineteenth National Conference on Artificial Intelligence, 900–907. Menlo Park, CA: AAAI Press.

Veloso, M. M. 1994. Planning and Learning by Analogical Reasoning. Berlin: Springer.

Veloso, M., and Carbonell, J. G. 1994. Case-Based Reasoning in PRODIGY. In Machine Learning IV: A Multistrategy Approach, ed. R. S. Michalski and G. Tecuci, 523–548. San Francisco: Morgan Kaufmann Publishers.

Veloso, M.; Carbonell, J. G.; Perez, A.; Borrajo, D.; Fink, E.; and Blythe, J. 1995. Integrating Planning and Learning: The PRODIGY Architecture. Journal of Theoretical and Experimental Artificial Intelligence 7(1): 81–120.

Michael T. Cox is a senior scientist in the Intelligent Distributed Computing Department of BBN Technologies, Cambridge, MA. Previous to this position, Cox was an assistant professor in the Department of Computer Science and Engineering at Wright State University, Dayton, Ohio, where he was the director of Wright State's Collaboration and Cognition Laboratory. His research interests include case-based reasoning, collaborative mixed-initiative planning, understanding (situation assessment), introspection, and learning. More specifically, he is interested in how goals interact with and influence these broader cognitive processes.
