
Int. Journal of Game Theory, Vol. 7, Issue 2, pp. 73-80. Physica-Verlag, Vienna.

Refinements of the Nash Equilibrium Concept

By R.B. Myerson, Evanston 1) 2)

Abstract. Selten's concept of perfect equilibrium for normal form games is reviewed, and a new
concept of proper equilibrium is defined. It is shown that the proper equilibria form a nonempty
subset of the perfect equilibria, which in turn form a subset of the Nash equilibria. An example
is given to show that these inclusions may be strict.

1. Introduction

The concept of equilibrium, as defined by Nash [1951], is one of the most important and elegant ideas in game theory. Unfortunately, a game can have many Nash equilibria, and some of these equilibria may be inconsistent with our intuitive notions about what should be the outcome of a game. To reduce this ambiguity and to eliminate some of these counterintuitive equilibria, Selten [1975] introduced the concept of a perfect equilibrium. In this paper, we shall define the notion of a proper equilibrium, to further refine the equilibrium concept. We will show that, for any game, the proper equilibria form a nonempty subset of Selten's perfect equilibria, which are themselves a subset of the Nash equilibria.
To see how these counterintuitive equilibria can arise, consider the game in Figure 1.

$\Gamma_1$:                  Player 2

                      $\beta_1$      $\beta_2$

Player 1   $\alpha_1$    1, 1         0, 0
           $\alpha_2$    0, 0         0, 0

Fig. 1

1) The author acknowledges helpful conversations with Ehud Kalai and David Schmeidler.
2) Prof. R.B. Myerson, Graduate School of Management, Northwestern University, Nathaniel
Leverone Hall, Evanston, Illinois 60201, USA.

There are two Nash equilibria in this game, $(\alpha_1, \beta_1)$ and $(\alpha_2, \beta_2)$, because in each case neither player can improve his payoff by unilaterally changing his strategy. But it would be unreasonable to predict $(\alpha_2, \beta_2)$ as the outcome of this game. If player 1 thought that there was any chance of player 2 using $\beta_1$, then player 1 would certainly prefer $\alpha_1$. Thus $(\alpha_2, \beta_2)$ qualifies as an equilibrium only because Nash's definition presumes that a player will ignore all parts of the payoff matrix corresponding to opponents' strategies which are given zero probability. The essential idea behind Selten's perfect equilibria is that no strategy should ever be given zero probability, since there is always a small chance that any strategy might be chosen, if only by mistake. So, in our example, $\alpha_1$ and $\beta_1$ must always get at least an infinitesimal probability weight, which will eliminate $(\alpha_2, \beta_2)$ from the class of perfect (and proper) equilibria.

2. Normal Form Games and Nash Equilibria

Although Selten [1975] initially developed his theory of perfect equilibria for extensive form games, we will limit our attention in this paper to normal form games. $\Gamma$ is an $n$-person game in normal form if

$$\Gamma = (S_1, \ldots, S_n;\ U_1, \ldots, U_n) \qquad (1)$$

where each $S_i$ is a nonempty finite set, and each $U_i$ is a real-valued function defined on the domain $S_1 \times S_2 \times \cdots \times S_n$. We interpret $\{1, 2, \ldots, n\}$ as the set of players in the game. For each player $i$, $S_i$ is the set of pure strategies which are available to player $i$. Each $U_i$ is the utility function for player $i$, so that $U_i(s_1, \ldots, s_n)$ would be the payoff to player $i$ (measured in some von Neumann-Morgenstern utility scale) if $(s_1, \ldots, s_n)$ were the combination of strategies chosen by the players.
For any finite set $M$, let $\Delta(M)$ be the set of all probability distributions over $M$. Thus:

$$\Delta(S_i) = \{\sigma_i \in \mathbb{R}^{S_i} \mid \sum_{s_i \in S_i} \sigma_i(s_i) = 1,\ \sigma_i(s_i) \geq 0\ \ \forall s_i \in S_i\}. \qquad (2)$$

So $\Delta(S_i)$ is the set of randomized or mixed strategies which player $i$ could choose in $\Gamma$.
It is straightforward to extend the utility functions to the mixed strategies, using the formula:

$$U_i(\sigma_1, \ldots, \sigma_n) = \sum_{(s_1, \ldots, s_n) \in S_1 \times \cdots \times S_n} \left( \prod_{j=1}^{n} \sigma_j(s_j) \right) U_i(s_1, \ldots, s_n). \qquad (3)$$

That is, $U_i(\sigma_1, \ldots, \sigma_n)$ is the expected utility which player $i$ would get if each player $j$ planned to independently randomize his strategy according to $\sigma_j$.
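As a concrete illustration of formula (3), here is a minimal Python sketch (the game data and function names are illustrative, not from the paper) that computes $U_i(\sigma_1, \ldots, \sigma_n)$ by summing player $i$'s payoff over all pure-strategy profiles, each weighted by the product of the marginal probabilities.

```python
from itertools import product

def expected_utility(payoff_i, sigmas):
    """Formula (3): U_i(sigma_1, ..., sigma_n) as a probability-weighted sum of payoffs.

    payoff_i -- dict mapping each pure-strategy profile (s_1, ..., s_n) to player i's payoff
    sigmas   -- list of dicts; sigmas[j] maps each pure strategy of player j+1 to its probability
    """
    strategy_sets = [list(sigma) for sigma in sigmas]
    total = 0.0
    for profile in product(*strategy_sets):
        prob = 1.0
        for sigma, s in zip(sigmas, profile):
            prob *= sigma[s]              # independence: multiply the marginal probabilities
        total += prob * payoff_i[profile]
    return total

# Player 1's payoffs in the 2x2 game of Figure 1.
U1 = {("a1", "b1"): 1, ("a1", "b2"): 0, ("a2", "b1"): 0, ("a2", "b2"): 0}
sigma1 = {"a1": 0.5, "a2": 0.5}
sigma2 = {"b1": 0.9, "b2": 0.1}
print(expected_utility(U1, [sigma1, sigma2]))   # 0.5 * 0.9 * 1 = 0.45
```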
Suppose that $(\sigma_1, \ldots, \sigma_n) \in \times_{i=1}^{n} \Delta(S_i)$ is the combination of mixed strategies which the players are expected to use, but suppose that player $j$ is considering whether to switch to one of his pure strategies $s_j^* \in S_j$. Let $V_j(s_j^* \mid \sigma_1, \ldots, \sigma_n)$ be the expected utility for $j$ if he makes this switch and all others remain with their $\sigma_i$ mixed strategies, so that:

$$V_j(s_j^* \mid \sigma_1, \ldots, \sigma_n) = U_j(\sigma_1, \ldots, \sigma_{j-1}, \sigma_j^*, \sigma_{j+1}, \ldots, \sigma_n) \qquad (4)$$

where $\sigma_j^*(s_j) = \begin{cases} 1 & \text{if } s_j = s_j^*, \\ 0 & \text{if } s_j \neq s_j^*. \end{cases}$

We say that $s_j$ is a best response (in pure strategies) to $(\sigma_1, \ldots, \sigma_n)$ for player $j$ iff

$$V_j(s_j \mid \sigma_1, \ldots, \sigma_n) = \max_{s_j' \in S_j} V_j(s_j' \mid \sigma_1, \ldots, \sigma_n).$$

A combination of mixed strategies $(\sigma_1, \ldots, \sigma_n)$ is a Nash equilibrium if no player can gain by unilaterally switching to any other mixed strategy. That is, $(\sigma_1, \ldots, \sigma_n) \in \times_{i=1}^{n} \Delta(S_i)$ is a Nash equilibrium iff

$$U_j(\sigma_1, \ldots, \sigma_n) \geq U_j(\sigma_1, \ldots, \sigma_j', \ldots, \sigma_n) \qquad \forall j,\ \forall \sigma_j' \in \Delta(S_j). \qquad (5)$$

It is well known that a combination of mixed strategies forms a Nash equilibrium if and only if every player gives positive probability only to his pure strategies which are best responses for him. The following proposition states this fact in terms which will be most convenient for our purposes.

Proposition 1: $(\sigma_1, \ldots, \sigma_n)$ is a Nash equilibrium iff: $(\sigma_1, \ldots, \sigma_n) \in \Delta(S_1) \times \cdots \times \Delta(S_n)$, and if $V_j(s_j \mid \sigma_1, \ldots, \sigma_n) < V_j(s_j' \mid \sigma_1, \ldots, \sigma_n)$ then $\sigma_j(s_j) = 0$, $\forall j$, $\forall s_j \in S_j$, $\forall s_j' \in S_j$.

To check this proposition, observe that

$$U_j(\sigma_1, \ldots, \sigma_n) - U_j(\sigma_1, \ldots, \sigma_j', \ldots, \sigma_n) = \sum_{s_j \in S_j} \sum_{s_j' \in S_j} \sigma_j(s_j)\, \sigma_j'(s_j') \left( V_j(s_j \mid \sigma_1, \ldots, \sigma_n) - V_j(s_j' \mid \sigma_1, \ldots, \sigma_n) \right).$$

The proposition then follows easily from the definition of a Nash equilibrium.
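For a two-player game, the test in Proposition 1 is easy to automate. The sketch below is illustrative only (the helper names `pure_strategy_values` and `is_nash` are not from the paper): it computes $V_j(s_j \mid \sigma)$ for every pure strategy and checks that positive probability is placed only on maximizers, confirming that both $(\alpha_1, \beta_1)$ and $(\alpha_2, \beta_2)$ in Figure 1 pass the test.

```python
def pure_strategy_values(payoff, opponent_sigma):
    """V_j(s_j | sigma): expected payoff of each own pure strategy against the opponent's mix.

    payoff[s][t] -- own payoff when playing s against the opponent's pure strategy t
    """
    return {s: sum(p * payoff[s][t] for t, p in opponent_sigma.items())
            for s in payoff}

def is_nash(payoff1, payoff2, sigma1, sigma2, tol=1e-9):
    """Proposition 1: positive probability only on pure strategies that are best responses."""
    v1 = pure_strategy_values(payoff1, sigma2)
    v2 = pure_strategy_values(payoff2, sigma1)
    best1, best2 = max(v1.values()), max(v2.values())
    return (all(sigma1[s] <= tol or v1[s] >= best1 - tol for s in sigma1) and
            all(sigma2[t] <= tol or v2[t] >= best2 - tol for t in sigma2))

# Figure 1: both (a1, b1) and (a2, b2) pass the Nash test.
U1 = {"a1": {"b1": 1, "b2": 0}, "a2": {"b1": 0, "b2": 0}}
U2 = {"b1": {"a1": 1, "a2": 0}, "b2": {"a1": 0, "a2": 0}}
print(is_nash(U1, U2, {"a1": 1, "a2": 0}, {"b1": 1, "b2": 0}))  # True
print(is_nash(U1, U2, {"a1": 0, "a2": 1}, {"b1": 0, "b2": 1}))  # True
```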

3. Perfect Equilibria

In this section we review Selten's concept of perfect equilibrium, using a new approach which will be more convenient for our purposes.

For any finite set $M$, let $\Delta^0(M)$ be the set of all probability distributions on $M$ which give positive probability weight to all members of $M$. So, for any player $i$,

$$\Delta^0(S_i) = \{\sigma_i \in \mathbb{R}^{S_i} \mid \sum_{s_i \in S_i} \sigma_i(s_i) = 1,\ \sigma_i(s_i) > 0\ \ \forall s_i \in S_i\}. \qquad (6)$$

The members of $\Delta^0(S_i)$ are called totally mixed strategies for player $i$.
Heuristically, a perfect equilibrium could be described as a combination of totally mixed strategies $(\sigma_1, \ldots, \sigma_n)$ such that, for any player $j$ and any pure strategy $s_j \in S_j$, if $s_j$ is not a best response to $(\sigma_1, \ldots, \sigma_n)$ for $j$ then $\sigma_j(s_j)$ should be infinitesimally small. Thus, in a perfect equilibrium a player cannot ignore any point in $S_1 \times \cdots \times S_n$ as "impossible" when he computes his best responses, since every one of his opponents' pure strategies has a positive probability; and yet no player wants to make any substantial change in his strategy, since each player is already putting almost all his probability weight on his best responses.
To make these ideas about "infinitesimally small" probabilities precise, let $\varepsilon$ be any small positive number. Then we define an $\varepsilon$-perfect equilibrium to be any combination of totally mixed strategies $(\sigma_1, \ldots, \sigma_n) \in \Delta^0(S_1) \times \cdots \times \Delta^0(S_n)$ such that:

$$\text{if } V_j(s_j \mid \sigma_1, \ldots, \sigma_n) < V_j(s_j' \mid \sigma_1, \ldots, \sigma_n) \text{ then } \sigma_j(s_j) \leq \varepsilon, \qquad \forall j,\ \forall s_j \in S_j,\ \forall s_j' \in S_j. \qquad (7)$$

So an $\varepsilon$-perfect equilibrium is a combination of mixed strategies such that every pure strategy gets a positive probability, but only best-response strategies get more than $\varepsilon$ probability.
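Condition (7) can likewise be checked mechanically. The following sketch (illustrative, two-player case, with hypothetical helper names) verifies that a profile is totally mixed and that every pure strategy which is not a best response carries probability at most $\varepsilon$; applied to Figure 1, a profile trembling around $(\alpha_1, \beta_1)$ passes, while one trembling around $(\alpha_2, \beta_2)$ fails.

```python
def values(payoff, opponent_sigma):
    """V_j(s_j | sigma): expected payoff of each own pure strategy against the opponent's mix."""
    return {s: sum(p * payoff[s][t] for t, p in opponent_sigma.items()) for s in payoff}

def is_epsilon_perfect(payoff1, payoff2, sigma1, sigma2, eps):
    """Condition (7): totally mixed, and non-best responses carry probability at most eps."""
    for payoff, own, other in ((payoff1, sigma1, sigma2), (payoff2, sigma2, sigma1)):
        v = values(payoff, other)
        best = max(v.values())
        if any(own[s] <= 0 for s in own):                    # must be totally mixed
            return False
        if any(v[s] < best and own[s] > eps for s in own):   # inferior replies capped by eps
            return False
    return True

# In Figure 1, the profile (1 - e, e) on (a1, a2) and (1 - e, e) on (b1, b2) is e-perfect;
# the profile concentrating near (a2, b2) is not.
U1 = {"a1": {"b1": 1, "b2": 0}, "a2": {"b1": 0, "b2": 0}}
U2 = {"b1": {"a1": 1, "a2": 0}, "b2": {"a1": 0, "a2": 0}}
e = 0.01
print(is_epsilon_perfect(U1, U2, {"a1": 1 - e, "a2": e}, {"b1": 1 - e, "b2": e}, e))  # True
print(is_epsilon_perfect(U1, U2, {"a1": e, "a2": 1 - e}, {"b1": e, "b2": 1 - e}, e))  # False
```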
A perfect equilibrium is then defined to be any limit of $\varepsilon$-perfect equilibria. That is, $(\sigma_1, \ldots, \sigma_n) \in \Delta(S_1) \times \cdots \times \Delta(S_n)$ is a perfect equilibrium iff there exist some sequences $\{\varepsilon_k\}_{k=1}^{\infty}$ and $\{(\sigma_1^k, \ldots, \sigma_n^k)\}_{k=1}^{\infty}$ such that:

$$\text{each } \varepsilon_k > 0 \text{ and } \lim_{k \to \infty} \varepsilon_k = 0, \qquad (8a)$$

$$\text{each } (\sigma_1^k, \ldots, \sigma_n^k) \text{ is an } \varepsilon_k\text{-perfect equilibrium, and} \qquad (8b)$$

$$\lim_{k \to \infty} \sigma_i^k(s_i) = \sigma_i(s_i), \text{ for all } i \text{ and all } s_i \in S_i. \qquad (8c)$$

(Notice that every $\sigma_i^k$ will have to be in $\Delta^0(S_i)$, but $\sigma_i$ need not be, since the closure of $\Delta^0(S_i)$ is $\Delta(S_i)$.)
Selten has shown that a perfect equilibrium must be a Nash equilibrium [see Lemma 9 in Selten]. The proof follows easily from the fact that $V_j(s_j \mid \sigma_1, \ldots, \sigma_n)$ is continuous in $(\sigma_1, \ldots, \sigma_n)$. So a perfect equilibrium must satisfy the conditions of our Proposition 1, because it is the limit of $\varepsilon$-perfect equilibria which satisfy (7).
All perfect equilibria are Nash equilibria, but the converse does not hold. For example, consider the game $\Gamma_1$ defined in Figure 1. The only $\varepsilon$-perfect equilibria are those pairs of totally mixed strategies in which $\sigma_1(\alpha_2) \leq \varepsilon$ and $\sigma_2(\beta_2) \leq \varepsilon$. Thus the only perfect equilibrium is $(\alpha_1, \beta_1)$ - or, more precisely, it is $(\sigma_1^*, \sigma_2^*)$, where $\sigma_1^*(\alpha_1) = 1$ and $\sigma_2^*(\beta_1) = 1$. $(\alpha_2, \beta_2)$ was a Nash equilibrium, but it is not a perfect equilibrium.

4. Proper Equilibria

Consider now the following simple example:

$\Gamma_2$:                  Player 2   $(U_1, U_2)$

                      $\beta_1$      $\beta_2$      $\beta_3$

           $\alpha_1$    1, 1         0, 0        -9, -9
Player 1   $\alpha_2$    0, 0         0, 0        -7, -7
           $\alpha_3$   -9, -9       -7, -7       -7, -7

Fig. 2

As in our first example, $(\alpha_1, \beta_1)$ would seem like the obvious outcome for this game. There are three Nash equilibria, and all are in pure strategies (or, more precisely, in mixed strategies which assign all weight to one pair of pure strategies); these equilibria are $(\alpha_1, \beta_1)$, $(\alpha_2, \beta_2)$, and $(\alpha_3, \beta_3)$.
Of these three Nash equilibria, $(\alpha_3, \beta_3)$ is not perfect, but $(\alpha_1, \beta_1)$ and $(\alpha_2, \beta_2)$ are both perfect equilibria. To check that $(\alpha_2, \beta_2)$ is a perfect equilibrium, define $\sigma_1^\varepsilon$ and $\sigma_2^\varepsilon$ by

$$\sigma_1^\varepsilon(\alpha_1) = \varepsilon, \quad \sigma_1^\varepsilon(\alpha_2) = 1 - 2\varepsilon, \quad \sigma_1^\varepsilon(\alpha_3) = \varepsilon,$$
$$\sigma_2^\varepsilon(\beta_1) = \varepsilon, \quad \sigma_2^\varepsilon(\beta_2) = 1 - 2\varepsilon, \quad \sigma_2^\varepsilon(\beta_3) = \varepsilon,$$

and observe that $(\sigma_1^\varepsilon, \sigma_2^\varepsilon)$ forms an $\varepsilon$-perfect equilibrium. (For example, $V_1(\alpha_1 \mid \sigma_1^\varepsilon, \sigma_2^\varepsilon) = -8\varepsilon$, $V_1(\alpha_2 \mid \sigma_1^\varepsilon, \sigma_2^\varepsilon) = -7\varepsilon$ and $V_1(\alpha_3 \mid \sigma_1^\varepsilon, \sigma_2^\varepsilon) = -7 - 2\varepsilon$, so $\alpha_2$ is the best response. As required, $\sigma_1^\varepsilon(\alpha_1) \leq \varepsilon$ and $\sigma_1^\varepsilon(\alpha_3) \leq \varepsilon$.) Then, as $\varepsilon \to 0$, these $\sigma_1^\varepsilon$ and $\sigma_2^\varepsilon$ converge to the strategies which select $\alpha_2$ and $\beta_2$ with probability 1. In effect, adding the $\alpha_3$ row and $\beta_3$ column has converted $(\alpha_2, \beta_2)$ into a perfect equilibrium even though $\alpha_3$ and $\beta_3$ are obviously dominated strategies.
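The expected values quoted in this check are easy to reproduce numerically. The short sketch below (illustrative; it reuses the two-player conventions of the earlier sketches) evaluates $V_1(\cdot \mid \sigma_1^\varepsilon, \sigma_2^\varepsilon)$ for $\varepsilon = 0.01$ and confirms the values $-8\varepsilon$, $-7\varepsilon$ and $-7 - 2\varepsilon$.

```python
def values(payoff, opponent_sigma):
    """V_1(s | sigma_2): player 1's expected payoff for each row against player 2's mix."""
    return {s: sum(p * payoff[s][t] for t, p in opponent_sigma.items()) for s in payoff}

# Player 1's payoffs in the game of Figure 2.
U1 = {"a1": {"b1": 1, "b2": 0, "b3": -9},
      "a2": {"b1": 0, "b2": 0, "b3": -7},
      "a3": {"b1": -9, "b2": -7, "b3": -7}}

e = 0.01
sigma2 = {"b1": e, "b2": 1 - 2 * e, "b3": e}    # the trembling strategy defined in the text
v = values(U1, sigma2)
print(v["a1"])   # about -0.08, i.e. -8e
print(v["a2"])   # about -0.07, i.e. -7e: a2 is the unique best response
print(v["a3"])   # about -7.02, i.e. -7 - 2e
```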
To discriminate between $(\alpha_1, \beta_1)$ and $(\alpha_2, \beta_2)$ in this example, we need a new refinement of the equilibrium concept: the proper equilibrium.
For our general normal form game, we define an $\varepsilon$-proper equilibrium to be any combination of totally mixed strategies $(\sigma_1, \ldots, \sigma_n) \in \Delta^0(S_1) \times \cdots \times \Delta^0(S_n)$ such that:

$$\text{if } V_j(s_j \mid \sigma_1, \ldots, \sigma_n) < V_j(s_j' \mid \sigma_1, \ldots, \sigma_n) \text{ then } \sigma_j(s_j) \leq \varepsilon \cdot \sigma_j(s_j'), \qquad \forall j,\ \forall s_j \in S_j,\ \forall s_j' \in S_j. \qquad (9)$$

So an $\varepsilon$-proper equilibrium is a combination of totally mixed strategies in which every player is giving his better responses much more probability weight than his worse responses (by a factor $1/\varepsilon$), whether or not those "better" responses are "best".
It is easy to check that an $\varepsilon$-proper equilibrium must be $\varepsilon$-perfect (since $\varepsilon \cdot \sigma_j(s_j') \leq \varepsilon$ in (9)), but the converse does not hold. In the example above, $(\sigma_1^\varepsilon, \sigma_2^\varepsilon)$ is not $\varepsilon$-proper because, for $0 < \varepsilon < 1$, $V_1(\alpha_3 \mid \sigma_1^\varepsilon, \sigma_2^\varepsilon) < V_1(\alpha_1 \mid \sigma_1^\varepsilon, \sigma_2^\varepsilon)$ but $\sigma_1^\varepsilon(\alpha_3) > \varepsilon \cdot \sigma_1^\varepsilon(\alpha_1)$.
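Condition (9) differs from (7) only in the right-hand side, so the checker changes in one line. The sketch below (illustrative, two-player case, hypothetical names) tests the $\varepsilon$-proper condition and confirms that the trembling profile used above for Figure 2 is rejected: $\sigma_1^\varepsilon(\alpha_3) = \varepsilon$ exceeds $\varepsilon \cdot \sigma_1^\varepsilon(\alpha_1) = \varepsilon^2$.

```python
def values(payoff, opponent_sigma):
    """V_j(s_j | sigma): each own pure strategy's expected payoff against the opponent's mix."""
    return {s: sum(p * payoff[s][t] for t, p in opponent_sigma.items()) for s in payoff}

def is_epsilon_proper(payoff1, payoff2, sigma1, sigma2, eps):
    """Condition (9): worse replies get at most eps times the weight of better replies."""
    for payoff, own, other in ((payoff1, sigma1, sigma2), (payoff2, sigma2, sigma1)):
        if any(own[s] <= 0 for s in own):        # must be totally mixed
            return False
        v = values(payoff, other)
        if any(v[s] < v[t] and own[s] > eps * own[t] for s in own for t in own):
            return False
    return True

# Figure 2: the e-perfect profile (e, 1 - 2e, e) for both players is not e-proper,
# because V1(a3) < V1(a1) yet sigma1(a3) = e > e * sigma1(a1) = e**2.
U1 = {"a1": {"b1": 1, "b2": 0, "b3": -9},
      "a2": {"b1": 0, "b2": 0, "b3": -7},
      "a3": {"b1": -9, "b2": -7, "b3": -7}}
U2 = {"b1": {"a1": 1, "a2": 0, "a3": -9},
      "b2": {"a1": 0, "a2": 0, "a3": -7},
      "b3": {"a1": -9, "a2": -7, "a3": -7}}
e = 0.01
print(is_epsilon_proper(U1, U2,
                        {"a1": e, "a2": 1 - 2 * e, "a3": e},
                        {"b1": e, "b2": 1 - 2 * e, "b3": e}, e))  # False
```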
We now define a proper equilibrium to be any limit of $\varepsilon$-proper equilibria. That is, $(\sigma_1, \ldots, \sigma_n) \in \Delta(S_1) \times \cdots \times \Delta(S_n)$ is a proper equilibrium iff there exist some sequences $\{\varepsilon_k\}_{k=1}^{\infty}$ and $\{(\sigma_1^k, \ldots, \sigma_n^k)\}_{k=1}^{\infty}$ such that:

$$\text{each } \varepsilon_k > 0 \text{ and } \lim_{k \to \infty} \varepsilon_k = 0, \qquad (10a)$$

$$\text{each } (\sigma_1^k, \ldots, \sigma_n^k) \text{ is an } \varepsilon_k\text{-proper equilibrium, and} \qquad (10b)$$

$$\lim_{k \to \infty} \sigma_i^k(s_i) = \sigma_i(s_i), \text{ for all } i \text{ and all } s_i \in S_i. \qquad (10c)$$

As with perfect equilibria, a proper equilibrium need not be totally mixed; it must only be the limit of totally mixed $\varepsilon$-proper equilibria.

Proposition 2: For any game in normal form, the proper equilibria form a subset of the perfect equilibria, which in turn form a subset of the Nash equilibria. These inclusions may both be strict inclusions.

Proof: We already remarked that a perfect equilibrium must be a Nash equilibrium, and that an $\varepsilon$-proper equilibrium must be an $\varepsilon$-perfect equilibrium. So a proper equilibrium, as a limit of $\varepsilon$-proper equilibria, is also a limit of $\varepsilon$-perfect equilibria, and is therefore perfect.
The example in Figure 2 shows that these inclusions may be strict, since this game has three Nash equilibria ($(\alpha_1, \beta_1)$, $(\alpha_2, \beta_2)$, and $(\alpha_3, \beta_3)$), but only two perfect equilibria ($(\alpha_1, \beta_1)$ and $(\alpha_2, \beta_2)$), and only one proper equilibrium, $(\alpha_1, \beta_1)$. To verify that $(\alpha_1, \beta_1)$ is the only proper equilibrium, suppose $0 < \varepsilon < 1$, and let $(\sigma_1, \sigma_2)$ be an $\varepsilon$-proper equilibrium. Since $\alpha_2$ dominates $\alpha_3$, and $\sigma_2$ is totally mixed, we have $V_1(\alpha_3 \mid \sigma_1, \sigma_2) < V_1(\alpha_2 \mid \sigma_1, \sigma_2)$, and so $\sigma_1(\alpha_3) \leq \varepsilon \cdot \sigma_1(\alpha_2)$. This implies that $V_2(\beta_3 \mid \sigma_1, \sigma_2) < V_2(\beta_1 \mid \sigma_1, \sigma_2)$, so $\sigma_2(\beta_3) \leq \varepsilon \cdot \sigma_2(\beta_1)$. This in turn implies that $V_1(\alpha_2 \mid \sigma_1, \sigma_2) < V_1(\alpha_1 \mid \sigma_1, \sigma_2)$, so $\sigma_1(\alpha_2) \leq \varepsilon \cdot \sigma_1(\alpha_1)$. So $\sigma_1(\alpha_2) \leq \varepsilon \cdot \sigma_1(\alpha_1) \leq \varepsilon$ and $\sigma_1(\alpha_3) \leq \varepsilon \cdot \sigma_1(\alpha_2) \leq \varepsilon^2$. A similar argument shows that $\sigma_2(\beta_2) \leq \varepsilon$ and $\sigma_2(\beta_3) \leq \varepsilon^2$. Since the probabilities must sum to 1, $\sigma_1(\alpha_1) \geq 1 - \varepsilon - \varepsilon^2$ and $\sigma_2(\beta_1) \geq 1 - \varepsilon - \varepsilon^2$. As $\varepsilon \to 0$, our $\varepsilon$-proper equilibria must converge to the mixed strategies which select $\alpha_1$ and $\beta_1$ with probability 1. Thus, although $(\alpha_2, \beta_2)$ is perfect for this game, it is not proper.
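To make the chain of inequalities in this proof concrete, here is a small numerical sketch (an illustration, not part of the original argument): the totally mixed profile that weights $\alpha_1, \alpha_2, \alpha_3$ in the ratio $1 : \varepsilon : \varepsilon^2$, with $\beta_1, \beta_2, \beta_3$ weighted likewise, satisfies condition (9) for small $\varepsilon$ and converges to $(\alpha_1, \beta_1)$ as $\varepsilon \to 0$; the check for player 2 is omitted because the game and the profile are symmetric.

```python
def values(payoff, opp):
    """Expected payoff of each own pure strategy against the opponent's mixed strategy."""
    return {s: sum(p * payoff[s][t] for t, p in opp.items()) for s in payoff}

# Figure 2 payoffs for player 1 (rows a1, a2, a3 against columns b1, b2, b3).
U1 = {"a1": {"b1": 1, "b2": 0, "b3": -9},
      "a2": {"b1": 0, "b2": 0, "b3": -7},
      "a3": {"b1": -9, "b2": -7, "b3": -7}}

for e in (0.1, 0.01, 0.001):
    Z = 1 + e + e * e
    sigma = {"a1": 1 / Z, "a2": e / Z, "a3": e * e / Z}      # weights 1 : e : e^2
    opp = {"b1": 1 / Z, "b2": e / Z, "b3": e * e / Z}        # symmetric profile for player 2
    v = values(U1, opp)
    assert v["a1"] > v["a2"] > v["a3"]                       # ranking of the replies
    assert sigma["a2"] <= e * sigma["a1"] + 1e-12            # worse reply down-weighted by e
    assert sigma["a3"] <= e * sigma["a2"] + 1e-12
    print(e, round(sigma["a1"], 4))   # the weight on a1 tends to 1 as e -> 0
```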

5. Existence of Proper Equilibria

To be useful, a refinement of the Nash equilibrium concept should generate a nonempty set of Nash equilibria for any game. We have already shown that our proper equilibria do form a subset of the Nash equilibria. The following theorem assures us that the set of proper equilibria is also nonempty.

Theorem: For any normal form game $\Gamma$ (as in (1)), there exists at least one proper equilibrium.

Proof: We show first that there exists an $\varepsilon$-proper equilibrium, for any $\varepsilon$, $0 < \varepsilon < 1$.
Let $m = \max_i |S_i|$. Given $\varepsilon$, let $\delta = \frac{1}{m}\varepsilon^m$. For any player $j$, let

$$\Delta^*(S_j) = \{\sigma_j \in \Delta(S_j) \mid \sigma_j(s_j) \geq \delta\ \ \forall s_j \in S_j\}.$$

Observe that $\Delta^*(S_j)$ is a nonempty compact subset of $\Delta^0(S_j)$. We now define a point-to-set map $F_j^\varepsilon : \times_{i=1}^{n} \Delta^*(S_i) \Rightarrow \Delta^*(S_j)$ by:

$$F_j^\varepsilon(\sigma_1, \ldots, \sigma_n) = \{\sigma_j^* \in \Delta^*(S_j) \mid \text{if } V_j(s_j \mid \sigma_1, \ldots, \sigma_n) < V_j(s_j' \mid \sigma_1, \ldots, \sigma_n) \text{ then } \sigma_j^*(s_j) \leq \varepsilon \cdot \sigma_j^*(s_j'),\ \forall s_j \in S_j,\ \forall s_j' \in S_j\}.$$

For any $(\sigma_1, \ldots, \sigma_n)$, the points in $F_j^\varepsilon(\sigma_1, \ldots, \sigma_n)$ satisfy a finite collection of linear inequalities, so $F_j^\varepsilon(\sigma_1, \ldots, \sigma_n)$ is a closed convex set. To check that $F_j^\varepsilon(\sigma_1, \ldots, \sigma_n)$ is nonempty, let $\rho(s_j)$ be the number of pure strategies $s_j' \in S_j$ such that $V_j(s_j \mid \sigma_1, \ldots, \sigma_n) < V_j(s_j' \mid \sigma_1, \ldots, \sigma_n)$; then letting

$$\sigma_j^*(s_j) = \varepsilon^{\rho(s_j)} \Big/ \sum_{s_j' \in S_j} \varepsilon^{\rho(s_j')}$$

will give us $\sigma_j^* \in F_j^\varepsilon(\sigma_1, \ldots, \sigma_n)$. (Observe that $\sigma_j^*(s_j) \geq \varepsilon^m / m$, so $\sigma_j^* \in \Delta^*(S_j)$.) Finally, continuity of each $V_j(s_j \mid \cdot)$ function implies that $F_j^\varepsilon(\cdot)$ must be upper-semicontinuous.
Let $F^\varepsilon(\cdot) = \times_{j=1}^{n} F_j^\varepsilon(\cdot)$. Then $F^\varepsilon : \times_{i=1}^{n} \Delta^*(S_i) \Rightarrow \times_{i=1}^{n} \Delta^*(S_i)$ satisfies all the conditions of the Kakutani Fixed Point Theorem [Kakutani], so there exists some $(\sigma_1^\varepsilon, \ldots, \sigma_n^\varepsilon) \in \times_{i=1}^{n} \Delta^*(S_i)$ such that $(\sigma_1^\varepsilon, \ldots, \sigma_n^\varepsilon) \in F^\varepsilon(\sigma_1^\varepsilon, \ldots, \sigma_n^\varepsilon)$. This $(\sigma_1^\varepsilon, \ldots, \sigma_n^\varepsilon)$ is clearly an $\varepsilon$-proper equilibrium.
So for any $0 < \varepsilon < 1$ there exists an $\varepsilon$-proper equilibrium $(\sigma_1^\varepsilon, \ldots, \sigma_n^\varepsilon)$. Since $\times_{i=1}^{n} \Delta(S_i)$ is a compact set, there must exist a convergent subsequence and a proper equilibrium $(\sigma_1, \ldots, \sigma_n) = \lim_{\varepsilon \to 0} (\sigma_1^\varepsilon, \ldots, \sigma_n^\varepsilon)$.
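The explicit point used to show that $F_j^\varepsilon(\sigma_1, \ldots, \sigma_n)$ is nonempty is easy to compute. The sketch below is only illustrative: it builds $\sigma_j^*$ by weighting each pure strategy $s_j$ by $\varepsilon^{\rho(s_j)}$ and then, purely as a heuristic experiment, iterates this selection on the game of Figure 2. The theorem itself relies on Kakutani's fixed point theorem applied to the correspondence $F^\varepsilon$, not on any iterative convergence; on this particular example, however, the iteration settles quickly near the proper equilibrium $(\alpha_1, \beta_1)$.

```python
def values(payoff, opp):
    """V_j(s_j | sigma): expected payoff of each own pure strategy against the opponent's mix."""
    return {s: sum(p * payoff[s][t] for t, p in opp.items()) for s in payoff}

def rho_selection(payoff, opp, eps):
    """The explicit point from the proof: weight s_j by eps**rho(s_j), where rho(s_j)
    counts the pure strategies that do strictly better against the current profile."""
    v = values(payoff, opp)
    rho = {s: sum(1 for t in v if v[s] < v[t]) for s in v}
    weights = {s: eps ** rho[s] for s in v}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

# Figure 2 payoffs; the game is symmetric, so U2 mirrors U1.
U1 = {"a1": {"b1": 1, "b2": 0, "b3": -9},
      "a2": {"b1": 0, "b2": 0, "b3": -7},
      "a3": {"b1": -9, "b2": -7, "b3": -7}}
U2 = {"b1": {"a1": 1, "a2": 0, "a3": -9},
      "b2": {"a1": 0, "a2": 0, "a3": -7},
      "b3": {"a1": -9, "a2": -7, "a3": -7}}

eps = 0.01
# Heuristic iteration of the rho-selection from the uniform profile (NOT the argument of
# the theorem).  On this example it ends with weights close to 1 on a1 and b1.
sigma1 = {s: 1 / 3 for s in U1}
sigma2 = {t: 1 / 3 for t in U2}
for _ in range(20):
    sigma1, sigma2 = rho_selection(U1, sigma2, eps), rho_selection(U2, sigma1, eps)
print(sigma1, sigma2)
```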
e-+0

References

Kakutani, S.: A Generalization of Brouwer's Fixed Point Theorem. Duke Mathematical Journal 8, 1941, 457-459.
Nash, J.F.: Non-Cooperative Games. Annals of Mathematics 54, 1951, 286-295.
Selten, R.: Reexamination of the Perfectness Concept for Equilibrium Points in Extensive Games. International Journal of Game Theory 4, 1975, 25-55.

Received September, 1977
