2 INTELLIGENT AGENTS
function TABLE-DRIVEN-AGENT(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequences, initially fully specified
  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action
Figure 2.8
Figure 2.10
Figure 2.13
Figure 2.16
3 SOLVING PROBLEMS BY SEARCHING
Figure 3.2
Figure 3.9
Figure 3.12
Figure 3.17
function ITERATIVE-DEEPENING-SEARCH(problem) returns a solution, or failure
  inputs: problem, a problem
  for depth ← 0 to ∞ do
    result ← DEPTH-LIMITED-SEARCH(problem, depth)
    if result ≠ cutoff then return result
Figure 3.19
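The iterative-deepening loop above can be sketched in Python. The tiny example tree, the `CUTOFF` sentinel, and the `max_depth` cap standing in for "0 to ∞" are illustrative assumptions, not part of the text.

```python
# Iterative deepening search: repeated depth-limited DFS with a growing limit.
CUTOFF = object()  # sentinel distinguishing "depth limit hit" from "no solution"

def depth_limited_search(state, goal, successors, limit):
    """Return a path to goal, the CUTOFF sentinel, or None (failure)."""
    if state == goal:
        return [state]
    if limit == 0:
        return CUTOFF
    cutoff_occurred = False
    for s in successors(state):
        result = depth_limited_search(s, goal, successors, limit - 1)
        if result is CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return CUTOFF if cutoff_occurred else None

def iterative_deepening_search(start, goal, successors, max_depth=50):
    for depth in range(max_depth + 1):   # stands in for "0 to infinity"
        result = depth_limited_search(start, goal, successors, depth)
        if result is not CUTOFF:
            return result

# Example: a small tree 1 -> {2, 3}, 2 -> {4}, 3 -> {5}
tree = {1: [2, 3], 2: [4], 3: [5]}
path = iterative_deepening_search(1, 5, lambda s: tree.get(s, []))
```

Because each iteration restarts the depth-limited search from scratch, memory use stays linear in the depth while the search still finds a shallowest goal first.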
Figure 3.25
4 INFORMED SEARCH AND EXPLORATION
function RBFS(problem, node, f_limit) returns a solution, or failure and a new f-cost limit
  if GOAL-TEST[problem](STATE[node]) then return node
  successors ← EXPAND(node, problem)
  if successors is empty then return failure, ∞
  for each s in successors do
    f[s] ← max(g(s) + h(s), f[node])
  repeat
    best ← the lowest f-value node in successors
    if f[best] > f_limit then return failure, f[best]
    alternative ← the second-lowest f-value among successors
    result, f[best] ← RBFS(problem, best, min(f_limit, alternative))
    if result ≠ failure then return result
Figure 4.6
Figure 4.13
Figure 4.17
function GENETIC-ALGORITHM(population, FITNESS-FN) returns an individual
  inputs: population, a set of individuals
          FITNESS-FN, a function that measures the fitness of an individual
  repeat
    new_population ← empty set
    loop for i from 1 to SIZE(population) do
      x ← RANDOM-SELECTION(population, FITNESS-FN)
      y ← RANDOM-SELECTION(population, FITNESS-FN)
      child ← REPRODUCE(x, y)
      if (small random probability) then child ← MUTATE(child)
      add child to new_population
    population ← new_population
  until some individual is fit enough, or enough time has elapsed
  return the best individual in population, according to FITNESS-FN
Figure 4.21
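The genetic-algorithm loop can be sketched concretely. The bit-string individuals, the ONE-MAX fitness function, and fitness-proportionate selection are illustrative assumptions chosen for this example.

```python
import random

random.seed(0)

def fitness(ind):               # ONE-MAX: count of 1 bits
    return sum(ind)

def random_selection(population):
    # Fitness-proportionate ("roulette wheel") selection.
    weights = [fitness(ind) + 1 for ind in population]   # +1 avoids zero weights
    return random.choices(population, weights=weights, k=1)[0]

def reproduce(x, y):
    c = random.randrange(1, len(x))     # single-point crossover
    return x[:c] + y[c:]

def mutate(child):
    child = child[:]
    i = random.randrange(len(child))
    child[i] = 1 - child[i]             # flip one bit
    return child

def genetic_algorithm(population, generations=100, p_mutate=0.1):
    for _ in range(generations):
        new_population = []
        for _ in range(len(population)):
            x = random_selection(population)
            y = random_selection(population)
            child = reproduce(x, y)
            if random.random() < p_mutate:
                child = mutate(child)
            new_population.append(child)
        population = new_population
        if any(fitness(ind) == len(ind) for ind in population):
            break   # "some individual is fit enough"
    return max(population, key=fitness)

pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
best = genetic_algorithm(pop)
```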
Figure 4.25
Figure 4.29
5 CONSTRAINT SATISFACTION PROBLEMS
Figure 5.4
function AC-3(csp) returns the CSP, possibly with reduced domains
  inputs: csp, a binary CSP with variables {X1, X2, …, Xn}
  local variables: queue, a queue of arcs, initially all the arcs in csp
  while queue is not empty do
    (Xi, Xj) ← REMOVE-FIRST(queue)
    if REMOVE-INCONSISTENT-VALUES(Xi, Xj) then
      for each Xk in NEIGHBORS[Xi] do
        add (Xk, Xi) to queue
Figure 5.9
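A minimal AC-3 sketch in Python. The dictionary-based CSP encoding and the X < Y < Z example problem are illustrative assumptions, not from the text.

```python
from collections import deque

def revise(csp, xi, xj):
    """Remove values of xi that have no consistent partner in xj's domain."""
    removed = False
    for v in list(csp["domains"][xi]):
        if not any(csp["constraint"](xi, v, xj, w) for w in csp["domains"][xj]):
            csp["domains"][xi].remove(v)
            removed = True
    return removed

def ac3(csp):
    queue = deque((xi, xj) for xi in csp["vars"]
                  for xj in csp["neighbors"][xi])
    while queue:
        xi, xj = queue.popleft()
        if revise(csp, xi, xj):
            for xk in csp["neighbors"][xi]:
                if xk != xj:
                    queue.append((xk, xi))
    return csp

# Example: X < Y < Z over {1, 2, 3} forces X=1, Y=2, Z=3.
csp = {
    "vars": ["X", "Y", "Z"],
    "domains": {v: {1, 2, 3} for v in "XYZ"},
    "neighbors": {"X": ["Y"], "Y": ["X", "Z"], "Z": ["Y"]},
    "constraint": lambda a, va, b, vb:
        va < vb if (a, b) in [("X", "Y"), ("Y", "Z")] else va > vb,
}
ac3(csp)
```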
Figure 5.11
6 ADVERSARIAL SEARCH
Figure 6.4
Figure 6.9
7 LOGICAL AGENTS
Figure 7.2
function TT-CHECK-ALL(KB, α, symbols, model) returns true or false
  if EMPTY?(symbols) then
    if PL-TRUE?(KB, model) then return PL-TRUE?(α, model)
    else return true
  else do
    P ← FIRST(symbols); rest ← REST(symbols)
    return TT-CHECK-ALL(KB, α, rest, EXTEND(P, true, model)) and
           TT-CHECK-ALL(KB, α, rest, EXTEND(P, false, model))
Figure 7.12
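The recursive truth-table check can be mirrored directly in Python. Encoding sentences as Python predicates over a model dictionary is an assumption made for this sketch.

```python
def tt_check_all(kb, alpha, symbols, model):
    """KB entails alpha iff alpha holds in every model where KB holds."""
    if not symbols:
        if kb(model):
            return alpha(model)
        return True             # vacuously true when KB is false here
    p, rest = symbols[0], symbols[1:]
    return (tt_check_all(kb, alpha, rest, {**model, p: True}) and
            tt_check_all(kb, alpha, rest, {**model, p: False}))

# KB: P and (P => Q).  Does it entail Q?  Does it entail not-P?
kb = lambda m: m["P"] and ((not m["P"]) or m["Q"])
entails_q = tt_check_all(kb, lambda m: m["Q"], ["P", "Q"], {})
entails_not_p = tt_check_all(kb, lambda m: not m["P"], ["P", "Q"], {})
```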
Figure 7.15
Figure 7.18
function DPLL(clauses, symbols, model) returns true or false
  if every clause in clauses is true in model then return true
  if some clause in clauses is false in model then return false
  P, value ← FIND-PURE-SYMBOL(symbols, clauses, model)
  if P is non-null then return DPLL(clauses, symbols – P, EXTEND(P, value, model))
  P, value ← FIND-UNIT-CLAUSE(clauses, model)
  if P is non-null then return DPLL(clauses, symbols – P, EXTEND(P, value, model))
  P ← FIRST(symbols); rest ← REST(symbols)
  return DPLL(clauses, rest, EXTEND(P, true, model)) or
         DPLL(clauses, rest, EXTEND(P, false, model))
Figure 7.21
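A compact DPLL sketch. Encoding clauses as sets of signed integer literals, and folding the unit-clause and pure-symbol heuristics inline rather than into separate helper functions, are choices made for this illustration.

```python
def dpll(clauses, symbols, model):
    # Evaluate clauses under the partial model.
    unresolved = []
    for clause in clauses:
        if any(model.get(abs(l)) == (l > 0) for l in clause):
            continue                        # clause already true
        undecided = [l for l in clause if abs(l) not in model]
        if not undecided:
            return False                    # clause false in model
        unresolved.append(undecided)
    if not unresolved:
        return True                         # every clause true
    # Unit clause heuristic: a clause with one undecided literal forces it.
    for undecided in unresolved:
        if len(undecided) == 1:
            l = undecided[0]
            return dpll(clauses, [s for s in symbols if s != abs(l)],
                        {**model, abs(l): l > 0})
    # Pure symbol heuristic: a literal whose negation never appears.
    literals = {l for c in unresolved for l in c}
    for l in literals:
        if -l not in literals:
            return dpll(clauses, [s for s in symbols if s != abs(l)],
                        {**model, abs(l): l > 0})
    # Otherwise branch on the first remaining symbol.
    p, rest = symbols[0], symbols[1:]
    return (dpll(clauses, rest, {**model, p: True}) or
            dpll(clauses, rest, {**model, p: False}))

# (P or Q) and (not P or Q) and (not Q or R): satisfiable.
sat = dpll([{1, 2}, {-1, 2}, {-2, 3}], [1, 2, 3], {})
# P and not P: unsatisfiable.
unsat = dpll([{1}, {-1}], [1], {})
```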
Figure 7.23
Figure 7.26
8 FIRST-ORDER LOGIC
9 INFERENCE IN FIRST-ORDER LOGIC
Figure 9.2
Figure 9.5
Figure 9.9
Figure 9.12
repeat
  clause ← the lightest member of sos
  move clause from sos to usable
  PROCESS(INFER(clause, usable), sos)
until sos = [ ] or a refutation has been found
Figure 9.19
10 KNOWLEDGE REPRESENTATION
11 PLANNING
Figure 11.3
Figure 11.5
Figure 11.7
Figure 11.11
[Planning-problem description (action schemas with PRECOND and EFFECT lists); symbols unrecoverable from this extraction.]
Figure 11.16
function GRAPHPLAN(problem) returns solution or failure
  graph ← INITIAL-PLANNING-GRAPH(problem)
  goals ← GOALS[problem]
  loop do
    if goals all non-mutex in last level of graph then do
      solution ← EXTRACT-SOLUTION(graph, goals, LENGTH(graph))
      if solution ≠ failure then return solution
      else if NO-SOLUTION-POSSIBLE(graph) then return failure
    graph ← EXPAND-GRAPH(graph, problem)
Figure 11.19
function SATPLAN(problem, Tmax) returns solution or failure
  inputs: problem, a planning problem
          Tmax, an upper limit for plan length
  for T = 0 to Tmax do
    cnf, mapping ← TRANSLATE-TO-SAT(problem, T)
    assignment ← SAT-SOLVER(cnf)
    if assignment is not null then
      return EXTRACT-SOLUTION(assignment, mapping)
  return failure
Figure 11.22
12 PLANNING AND ACTING IN THE REAL WORLD
Init(Chassis(C1) ∧ Chassis(C2)
  ∧ Engine(E1, C1, 30) ∧ Engine(E2, C2, 60)
  ∧ Wheels(W1, C1, 30) ∧ Wheels(W2, C2, 15))
Goal(Done(C1) ∧ Done(C2))
Action(AddEngine(e, c, d),
  PRECOND: Engine(e, c, d) ∧ Chassis(c) ∧ ¬EngineIn(c),
  EFFECT: EngineIn(c) ∧ Duration(d))
Action(AddWheels(w, c, d),
  PRECOND: Wheels(w, c, d) ∧ Chassis(c),
  EFFECT: WheelsOn(c) ∧ Duration(d))
Figure 12.2
Init(Chassis(C1) ∧ Chassis(C2)
  ∧ Engine(E1, C1, 30) ∧ Engine(E2, C2, 60)
  ∧ Wheels(W1, C1, 30) ∧ Wheels(W2, C2, 15)
  ∧ EngineHoists(1) ∧ WheelStations(1) ∧ Inspectors(2))
Goal(Done(C1) ∧ Done(C2))
Action(AddEngine(e, c, d),
  PRECOND: Engine(e, c, d) ∧ Chassis(c) ∧ ¬EngineIn(c),
  EFFECT: EngineIn(c) ∧ Duration(d),
  RESOURCE: EngineHoists(1))
Action(AddWheels(w, c, d),
  PRECOND: Wheels(w, c, d) ∧ Chassis(c),
  EFFECT: WheelsOn(c) ∧ Duration(d),
  RESOURCE: WheelStations(1))
Action(Inspect(c),
  PRECOND: EngineIn(c) ∧ WheelsOn(c),
  EFFECT: Done(c) ∧ Duration(10),
  RESOURCE: Inspectors(1))
Figure 12.5
[Action schemas and a partial-order plan (STEPS, ORDERINGS, LINKS); symbols unrecoverable from this extraction.]
Figure 12.9
function OR-SEARCH(state, problem, path) returns a conditional plan, or failure
  if GOAL-TEST[problem](state) then return the empty plan
  if state is on path then return failure
  for each action, state_set in SUCCESSORS[problem](state) do
    plan ← AND-SEARCH(state_set, problem, [state | path])
    if plan ≠ failure then return [action | plan]
  return failure
Figure 12.18
Figure 12.28
[Planning-problem description (Init and Action schemas with PRECOND and EFFECT lists); symbols unrecoverable from this extraction.]
Figure 12.30
13 UNCERTAINTY
Figure 13.2
function ENUMERATE-JOINT-ASK(X, e, P) returns a distribution over X
  inputs: X, the query variable
          e, observed values for variables E
          P, a joint distribution on variables {X} ∪ E ∪ Y /* Y = hidden variables */
  Q(X) ← a distribution over X, initially empty
  for each value xi of X do
    Q(xi) ← ENUMERATE-JOINT(xi, e, Y, [ ], P)
  return NORMALIZE(Q(X))
Figure 13.6
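Answering a query directly from a full joint distribution can be sketched in a few lines. The joint table over Cavity/Toothache (a dict from value tuples to probabilities) is an illustrative assumption.

```python
def enumerate_joint_ask(X, e, joint, variables):
    """Sum the joint over hidden variables, then normalize over X's values."""
    q = {}
    for row, p in joint.items():
        assignment = dict(zip(variables, row))
        if all(assignment[v] == val for v, val in e.items()):
            q[assignment[X]] = q.get(assignment[X], 0.0) + p
    total = sum(q.values())
    return {x: p / total for x, p in q.items()}

variables = ["Cavity", "Toothache"]
joint = {
    (True, True): 0.12, (True, False): 0.08,
    (False, True): 0.08, (False, False): 0.72,
}
posterior = enumerate_joint_ask("Cavity", {"Toothache": True}, joint, variables)
# P(Cavity | toothache) = 0.12 / 0.20 = 0.6
```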
14 PROBABILISTIC REASONING
function ENUMERATION-ASK(X, e, bn) returns a distribution over X
  inputs: X, the query variable
          e, observed values for variables E
          bn, a Bayes net with variables {X} ∪ E ∪ Y /* Y = hidden variables */
  Q(X) ← a distribution over X, initially empty
  for each value xi of X do
    extend e with value xi for X
    Q(xi) ← ENUMERATE-ALL(VARS[bn], e)
  return NORMALIZE(Q(X))
Figure 14.10
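Inference by enumeration can be made concrete on the standard burglary-alarm network (its CPT numbers are the usual textbook values); the dict-based network encoding is an assumption of this sketch.

```python
net = {  # var: (parents, CPT mapping parent-values -> P(var = True))
    "B": ((), {(): 0.001}),
    "E": ((), {(): 0.002}),
    "A": (("B", "E"), {(True, True): 0.95, (True, False): 0.94,
                       (False, True): 0.29, (False, False): 0.001}),
    "J": (("A",), {(True,): 0.90, (False,): 0.05}),
    "M": (("A",), {(True,): 0.70, (False,): 0.01}),
}
ORDER = ["B", "E", "A", "J", "M"]   # a topological order of the variables

def prob(var, value, ev):
    parents, cpt = net[var]
    p_true = cpt[tuple(ev[p] for p in parents)]
    return p_true if value else 1.0 - p_true

def enumerate_all(variables, ev):
    if not variables:
        return 1.0
    y, rest = variables[0], variables[1:]
    if y in ev:
        return prob(y, ev[y], ev) * enumerate_all(rest, ev)
    return sum(prob(y, v, {**ev, y: v}) * enumerate_all(rest, {**ev, y: v})
               for v in (True, False))

def enumeration_ask(X, e):
    q = {v: enumerate_all(ORDER, {**e, X: v}) for v in (True, False)}
    total = sum(q.values())
    return {v: p / total for v, p in q.items()}

posterior = enumeration_ask("B", {"J": True, "M": True})
# P(Burglary | john calls, mary calls) ≈ 0.284
```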
function ELIMINATION-ASK(X, e, bn) returns a distribution over X
  inputs: X, the query variable
          e, evidence specified as an event
          bn, a Bayesian network specifying joint distribution P(X1, …, Xn)
  factors ← [ ]; vars ← REVERSE(VARS[bn])
  for each var in vars do
    factors ← [MAKE-FACTOR(var, e) | factors]
    if var is a hidden variable then factors ← SUM-OUT(var, factors)
  return NORMALIZE(POINTWISE-PRODUCT(factors))
Figure 14.12
function PRIOR-SAMPLE(bn) returns an event sampled from the prior specified by bn
  inputs: bn, a Bayesian network specifying joint distribution P(X1, …, Xn)
Figure 14.15
function REJECTION-SAMPLING(X, e, bn, N) returns an estimate of P(X | e)
  inputs: X, the query variable
          e, evidence specified as an event
          bn, a Bayesian network
          N, the total number of samples to be generated
  local variables: N, a vector of counts over X, initially zero
  for j = 1 to N do
    x ← PRIOR-SAMPLE(bn)
    if x is consistent with e then
      N[x] ← N[x] + 1 where x is the value of X in x
  return NORMALIZE(N[X])
Figure 14.17
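Rejection sampling can be sketched on a tiny two-variable network Cloudy → Rain; the network and its numbers are illustrative assumptions.

```python
import random

random.seed(1)
P_CLOUDY = 0.5
P_RAIN = {True: 0.8, False: 0.2}   # P(Rain = True | Cloudy)

def prior_sample():
    c = random.random() < P_CLOUDY
    r = random.random() < P_RAIN[c]
    return {"Cloudy": c, "Rain": r}

def rejection_sampling(X, e, n):
    counts = {True: 0, False: 0}
    for _ in range(n):
        x = prior_sample()
        if all(x[v] == val for v, val in e.items()):   # consistent with e?
            counts[x[X]] += 1
    total = sum(counts.values())
    return {v: c / total for v, c in counts.items()}

estimate = rejection_sampling("Cloudy", {"Rain": True}, 10000)
# Exact answer: P(Cloudy | Rain) = (0.5 * 0.8) / 0.5 = 0.8
```

Note that roughly half the samples are rejected here; as evidence becomes less likely, rejection sampling wastes proportionally more work, which motivates likelihood weighting below.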
function LIKELIHOOD-WEIGHTING(X, e, bn, N) returns an estimate of P(X | e)
  inputs: X, the query variable
          e, evidence specified as an event
          bn, a Bayesian network
          N, the total number of samples to be generated
  local variables: W, a vector of weighted counts over X, initially zero
  for j = 1 to N do
    x, w ← WEIGHTED-SAMPLE(bn, e)
    W[x] ← W[x] + w where x is the value of X in x
  return NORMALIZE(W[X])

function WEIGHTED-SAMPLE(bn, e) returns an event and a weight
  x ← an event with n elements; w ← 1
  for i = 1 to n do
    if Xi has a value xi in e
      then w ← w × P(Xi = xi | parents(Xi))
      else xi ← a random sample from P(Xi | parents(Xi))
  return x, w
Figure 14.19
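Likelihood weighting on the same style of two-variable network (Cloudy → Rain): evidence variables are fixed rather than sampled, and each sample carries the likelihood of that evidence as a weight. The network numbers are illustrative assumptions.

```python
import random

random.seed(2)
P_CLOUDY = 0.5
P_RAIN = {True: 0.8, False: 0.2}   # P(Rain = True | Cloudy)

def weighted_sample(e):
    w = 1.0
    c = random.random() < P_CLOUDY        # Cloudy is not evidence: sample it
    if "Rain" in e:                        # Rain is evidence: weight instead
        w *= P_RAIN[c] if e["Rain"] else 1.0 - P_RAIN[c]
        r = e["Rain"]
    else:
        r = random.random() < P_RAIN[c]
    return {"Cloudy": c, "Rain": r}, w

def likelihood_weighting(X, e, n):
    W = {True: 0.0, False: 0.0}
    for _ in range(n):
        x, w = weighted_sample(e)
        W[x[X]] += w
    total = sum(W.values())
    return {v: w / total for v, w in W.items()}

estimate = likelihood_weighting("Cloudy", {"Rain": True}, 10000)
# Exact answer is again P(Cloudy | Rain) = 0.8, but no samples are wasted.
```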
function MCMC-ASK(X, e, bn, N) returns an estimate of P(X | e)
  local variables: N[x], a vector of counts over values of X, initially zero
                   Z, the nonevidence variables in bn
                   x, the current state of the network, initially copied from e
  initialize x with random values for the variables in Z
  for j = 1 to N do
    for each Zi in Z do
      sample the value of Zi in x from P(Zi | mb(Zi))
      N[x] ← N[x] + 1 where x is the value of X in x
  return NORMALIZE(N[X])

function FORWARD-BACKWARD(ev, prior) returns a vector of probability distributions
  local variables: fv, a vector of forward messages for steps 0, …, t
                   b, a representation of the backward message, initially all 1s
                   sv, a vector of smoothed estimates for steps 1, …, t
  fv[0] ← prior
  for i = 1 to t do
    fv[i] ← FORWARD(fv[i − 1], ev[i])
  for i = t downto 1 do
    sv[i] ← NORMALIZE(fv[i] × b)
    b ← BACKWARD(b, ev[i])
  return sv
Figure 15.5
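The forward-backward smoothing recurrences can be run on the classic umbrella HMM (hidden Rain, observed Umbrella, with the standard textbook numbers); the two-state list encoding is an assumption of this sketch.

```python
T = [[0.7, 0.3],    # P(Rain_t | Rain_{t-1}); rows index the previous state (R, not-R)
     [0.3, 0.7]]
SENSOR = {True: [0.9, 0.2], False: [0.1, 0.8]}   # P(obs | R), P(obs | not-R)

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

def forward(f, ev):
    pred = [sum(f[j] * T[j][i] for j in range(2)) for i in range(2)]
    return normalize([SENSOR[ev][i] * pred[i] for i in range(2)])

def backward(b, ev):
    return [sum(T[i][j] * SENSOR[ev][j] * b[j] for j in range(2))
            for i in range(2)]

def forward_backward(evidence, prior):
    fv = [prior]
    for e in evidence:
        fv.append(forward(fv[-1], e))
    sv, b = [None] * len(evidence), [1.0, 1.0]
    for i in range(len(evidence), 0, -1):
        sv[i - 1] = normalize([fv[i][j] * b[j] for j in range(2)])
        b = backward(b, evidence[i - 1])
    return sv

smoothed = forward_backward([True, True], [0.5, 0.5])
# smoothed[0][0] = P(Rain_1 | umbrella_1, umbrella_2) ≈ 0.883
```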
15 PROBABILISTIC REASONING OVER TIME
function FIXED-LAG-SMOOTHING(et, hmm, d) returns a distribution over X(t−d)
  inputs: et, the current evidence for time step t
          hmm, a hidden Markov model with S × S transition matrix T
          d, the length of the lag for smoothing
  static: t, the current time, initially 1
          f, a probability distribution, the forward message P(Xt | e1:t), initially PRIOR[hmm]
          B, the d-step backward transformation matrix, initially the identity matrix
          et−d:t, a double-ended list of evidence from t − d to t, initially empty
  local variables: Ot−d, Ot, diagonal matrices containing the sensor model information
Figure 15.8
function PARTICLE-FILTERING(e, N, dbn) returns a set of samples for the next time step
  inputs: e, the new incoming evidence
          N, the number of samples to be maintained
          dbn, a DBN with prior P(X0), transition model P(X1 | X0), and sensor model P(E1 | X1)
  static: S, a vector of samples of size N, initially generated from P(X0)
  local variables: W, a vector of weights of size N
  for i = 1 to N do
    S[i] ← sample from P(X1 | X0 = S[i])
    W[i] ← P(e | X1 = S[i])
  S ← WEIGHTED-SAMPLE-WITH-REPLACEMENT(N, S, W)
  return S
Figure 15.18
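The propagate/weight/resample cycle can be sketched on the umbrella HMM; the model numbers are the standard textbook ones, while the particle count is an arbitrary choice for this example.

```python
import random

random.seed(3)
P_STAY = 0.7                        # P(Rain_t = Rain_{t-1})
P_UMB = {True: 0.9, False: 0.2}     # P(umbrella | Rain)

def particle_filtering(samples, evidence):
    # 1. Propagate each sample through the transition model.
    samples = [s if random.random() < P_STAY else not s for s in samples]
    # 2. Weight each sample by the likelihood of the evidence.
    weights = [P_UMB[s] if evidence else 1.0 - P_UMB[s] for s in samples]
    # 3. Resample N new particles in proportion to the weights.
    return random.choices(samples, weights=weights, k=len(samples))

S = [random.random() < 0.5 for _ in range(2000)]   # prior P(Rain_0) = 0.5
for obs in [True, True]:                            # umbrella seen twice
    S = particle_filtering(S, obs)
p_rain = sum(S) / len(S)   # estimate of P(Rain_2 | u_1, u_2), exact value ≈ 0.883
```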
16 MAKING SIMPLE DECISIONS
function INFORMATION-GATHERING-AGENT(percept) returns an action
  static: D, a decision network
  integrate percept into D
  j ← the value that maximizes VPI(Ej) / Cost(Ej)
  if VPI(Ej) > Cost(Ej)
    then return REQUEST(Ej)
    else return the best action from D
Figure 16.9
17 MAKING COMPLEX DECISIONS
function VALUE-ITERATION(mdp, ε) returns a utility function
  inputs: mdp, an MDP with states S, transition model T, reward function R, discount γ
          ε, the maximum error allowed in the utility of any state
  local variables: U, U′, vectors of utilities for states in S, initially zero
                   δ, the maximum change in the utility of any state in an iteration
  repeat
    U ← U′; δ ← 0
    for each state s in S do
      U′[s] ← R[s] + γ max_a Σ_s′ T(s, a, s′) U[s′]
      if |U′[s] − U[s]| > δ then δ ← |U′[s] − U[s]|
  until δ < ε(1 − γ)/γ
  return U
Figure 17.5
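The Bellman-update loop can be sketched directly. The two-state MDP below (state A leads deterministically to terminal state B) is an illustrative assumption.

```python
def value_iteration(states, actions, T, R, gamma, eps=1e-6):
    U = {s: 0.0 for s in states}
    while True:
        U_new, delta = {}, 0.0
        for s in states:
            # Best expected utility over actions; terminal states have none.
            best = max((sum(p * U[s2] for s2, p in T[s][a])
                        for a in actions[s]), default=0.0)
            U_new[s] = R[s] + gamma * best
            delta = max(delta, abs(U_new[s] - U[s]))
        U = U_new
        if delta < eps * (1 - gamma) / gamma:
            return U

states = ["A", "B"]
actions = {"A": ["go"], "B": []}                  # B is terminal
T = {"A": {"go": [("B", 1.0)]}, "B": {}}
R = {"A": 0.0, "B": 10.0}
U = value_iteration(states, actions, T, R, gamma=0.9)
# Fixed point: U(B) = 10, U(A) = 0 + 0.9 * 10 = 9
```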
Figure 17.9
18 LEARNING FROM OBSERVATIONS
Figure 18.6
function ADABOOST(examples, L, M) returns a weighted-majority hypothesis
  inputs: examples, a set of N labeled examples (x1, y1), …, (xN, yN)
          L, a learning algorithm
          M, the number of hypotheses in the ensemble
  local variables: w, a vector of N example weights, initially 1/N
                   h, a vector of M hypotheses
                   z, a vector of M hypothesis weights
  for m = 1 to M do
    h[m] ← L(examples, w)
    error ← 0
    for j = 1 to N do
      if h[m](xj) ≠ yj then error ← error + w[j]
    for j = 1 to N do
      if h[m](xj) = yj then w[j] ← w[j] × error/(1 − error)
    w ← NORMALIZE(w)
    z[m] ← log((1 − error)/error)
  return WEIGHTED-MAJORITY(h, z)
Figure 18.12
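The boosting loop can be sketched with one-dimensional threshold stumps as the weak learner; the stump learner, the ±1 label convention, and the tiny dataset are illustrative assumptions.

```python
import math

def stump_learner(examples, w):
    """Pick the threshold/sign stump with the lowest weighted error."""
    best = None
    for t in [x for x, _ in examples]:
        for sign in (1, -1):
            h = lambda x, t=t, s=sign: s if x >= t else -s
            err = sum(wi for (x, y), wi in zip(examples, w) if h(x) != y)
            if best is None or err < best[0]:
                best = (err, h)
    return best[1]

def adaboost(examples, M):
    n = len(examples)
    w = [1.0 / n] * n
    h, z = [], []
    for _ in range(M):
        hm = stump_learner(examples, w)
        error = sum(wi for (x, y), wi in zip(examples, w) if hm(x) != y)
        error = min(max(error, 1e-10), 1 - 1e-10)   # guard the log and ratio
        for j, (x, y) in enumerate(examples):
            if hm(x) == y:                           # shrink correct examples
                w[j] *= error / (1 - error)
        total = sum(w)
        w = [wi / total for wi in w]
        h.append(hm)
        z.append(math.log((1 - error) / error))
    return lambda x: 1 if sum(zi * hi(x) for zi, hi in zip(z, h)) >= 0 else -1

examples = [(1, -1), (2, -1), (3, 1), (4, 1)]
H = adaboost(examples, M=3)
```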
Figure 19.3
Figure 19.5
function MINIMAL-CONSISTENT-DET(E, A) returns a set of attributes
  inputs: E, a set of examples
          A, a set of attributes, of size n
  for i ← 0, …, n do
    for each subset Ai of A of size i do
      if CONSISTENT-DET?(Ai, E) then return Ai

function CONSISTENT-DET?(A, E) returns a truth-value
  inputs: A, a set of attributes
          E, a set of examples
  local variables: H, a hash table
  for each example e in E do
    if some example in H has the same values as e for the attributes A
      but a different classification then return false
    store the class of e in H, indexed by the values for attributes A of the example
  return true
Figure 19.11
19 KNOWLEDGE IN LEARNING
Figure 19.16
20 STATISTICAL LEARNING METHODS
repeat
  for each e in examples do
    in ← Σj Wj xj[e]
    Err ← y[e] − g(in)
    Wj ← Wj + α × Err × g′(in) × xj[e]
until some stopping criterion is satisfied
return NEURAL-NET-HYPOTHESIS(network)
Figure 20.22
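The single-unit update rule can be sketched with a hard-threshold activation; learning the AND function, the zero weight initialization, and the epoch count are illustrative assumptions.

```python
def g(x):                        # hard threshold activation
    return 1 if x >= 0 else 0

def perceptron_learning(examples, epochs=25, alpha=0.1):
    w = [0.0, 0.0, 0.0]          # w[0] is the bias weight (fixed input x0 = 1)
    for _ in range(epochs):
        for x, y in examples:
            inputs = [1] + list(x)
            err = y - g(sum(wi * xi for wi, xi in zip(w, inputs)))
            # Update each weight in proportion to the error and its input.
            w = [wi + alpha * err * xi for wi, xi in zip(w, inputs)]
    return lambda x: g(sum(wi * xi for wi, xi in zip(w, [1] + list(x))))

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
h = perceptron_learning(AND)
```

Since AND is linearly separable, the perceptron convergence theorem guarantees the loop settles on a separating weight vector.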
repeat
  for each e in examples do
    for each node j in the input layer do aj ← xj[e]
    for ℓ = 2 to L do
      ini ← Σj Wj,i aj
      ai ← g(ini)
    for each node i in the output layer do
      Δi ← g′(ini) × (yi[e] − ai)
    for ℓ = L − 1 to 1 do
      for each node j in layer ℓ do
        Δj ← g′(inj) Σi Wj,i Δi
        for each node i in layer ℓ + 1 do
          Wj,i ← Wj,i + α × aj × Δi
until some stopping criterion is satisfied
return NEURAL-NET-HYPOTHESIS(network)
Figure 20.27
21 REINFORCEMENT LEARNING
if TERMINAL?[s′] then s, a, r ← null else s, a, r ← s′, π[s′], r′
return a
Figure 21.3
Figure 21.6
Figure 21.11
22 COMMUNICATION
Figure 22.3
procedure SCANNER(j, word)
  /* For each edge expecting a word of this category here, extend the edge. */
  for each [i, j, A → α • B β] in chart[j] do
    if word is of category B then
      ADD-EDGE([i, j + 1, A → α B • β])

procedure PREDICTOR([i, j, A → α • B β])
  /* Add to chart any rules for B that could help extend this edge */
  for each (B → γ) in REWRITES-FOR(B, grammar) do
    ADD-EDGE([j, j, B → • γ])

procedure EXTENDER([j, k, B → γ •])
  /* See what edges can be extended by this edge */
  e ← the edge that is the input to this procedure
  for each [i, j, A → α • B′ β] in chart[j] do
    if B = B′ then
      ADD-EDGE([i, k, A → α e • β])
Figure 22.9
23 PROBABILISTIC LANGUAGE PROCESSING
function VITERBI-SEGMENTATION(text, P) returns best words and their probabilities
  inputs: text, a string of characters with spaces removed
          P, a unigram probability distribution over words
  n ← LENGTH(text)
  words ← empty vector of length n + 1
  best ← vector of length n + 1, initially all 0.0
  best[0] ← 1.0
  /* Fill in the vectors best, words via dynamic programming */
  for i = 0 to n do
    for j = 0 to i − 1 do
      word ← text[j : i]
      w ← LENGTH(word)
      if P[word] × best[i − w] ≥ best[i] then
        best[i] ← P[word] × best[i − w]
        words[i] ← word
  /* Now recover the sequence of best words */
  sequence ← the empty list
  i ← n
  while i > 0 do
    push words[i] onto front of sequence
    i ← i − LENGTH(words[i])
  /* Return sequence of best words and overall probability of sequence */
  return sequence, best[n]
Figure 23.2
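The segmentation dynamic program translates almost line for line into Python; the toy unigram model is an illustrative assumption.

```python
def viterbi_segmentation(text, P):
    n = len(text)
    best = [0.0] * (n + 1)       # best[i]: probability of best split of text[:i]
    best[0] = 1.0
    words = [None] * (n + 1)     # words[i]: last word of that best split
    for i in range(n + 1):
        for j in range(i):
            word = text[j:i]
            if word in P and P[word] * best[j] >= best[i]:
                best[i] = P[word] * best[j]
                words[i] = word
    # Recover the best word sequence by walking back from the end.
    sequence, i = [], n
    while i > 0:
        sequence.insert(0, words[i])
        i -= len(words[i])
    return sequence, best[n]

P = {"hello": 0.2, "world": 0.1, "hell": 0.05, "o": 0.01, "oworld": 0.001}
sequence, prob = viterbi_segmentation("helloworld", P)
# -> ["hello", "world"] with probability 0.2 * 0.1 = 0.02
```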
24 PERCEPTION
for each (m1, m2, m3) in TRIPLETS(model) do
  for each (i1, i2, i3) in TRIPLETS(image) do
    Q ← FIND-TRANSFORM(m1, m2, m3, i1, i2, i3)
    if projection according to Q explains image then
      return Q
return failure
Figure 24.22
25 ROBOTICS
function MONTE-CARLO-LOCALIZATION(a, z, N, model, map) returns a set of samples
  inputs: a, the previous robot motion command
          z, a range scan with M readings z1, …, zM
          N, the number of samples to be maintained
          model, a probabilistic environment model with pose prior P(X0),
                 motion model P(X1 | X0, A0), and range sensor noise model P(Z | z*)
          map, a 2D map of the environment
  static: S, a vector of samples of size N, initially generated from P(X0)
  local variables: W, a vector of weights of size N
  for i = 1 to N do
    S[i] ← sample from P(X1 | X0 = S[i], A0 = a)
    W[i] ← 1
    for j = 1 to M do
      z* ← EXACT-RANGE(j, S[i], map)
      W[i] ← W[i] × P(Z = zj | z*)
  S ← WEIGHTED-SAMPLE-WITH-REPLACEMENT(N, S, W)
  return S
Figure 25.8
26 PHILOSOPHICAL FOUNDATIONS
27 AI: PRESENT AND FUTURE