
THE ACCOUNTING REVIEW
American Accounting Association
Vol. 93, No. 4
July 2018
pp. 127–149
DOI: 10.2308/accr-51955

Leveling the Playing Field: The Selection and Motivation Effects of Tournament Prize Spread Information

Eddy Cardinaels
KU Leuven and Tilburg University

Clara Xiaoling Chen
University of Illinois Urbana–Champaign

Huaxiang Yin
Nanyang Technological University

ABSTRACT: Many companies administer wage policies based on tournaments or have different salaries attached to
various promotion-based ranks within the company. Employees, however, do not always receive information about
pay-level differences at higher ranks prior to joining the company. While some companies openly disclose prize
spread information across these ranks, others keep such information secret. In this paper, we experimentally
investigate whether the availability of tournament prize spread information enhances employee effort through both a
selection effect and a motivation effect. We predict and find that when employees can select into tournaments of
varying prize spreads (which proxies for an environment where prize spread information is available), high-ability
employees are more likely than low-ability employees to select into the tournament with a larger prize spread. Thus,
the availability of prize spread information produces a separation of employees based on ability. We also find that
employees exert more effort when they can select into a tournament than when they are randomly assigned to one
(which proxies for an environment where prize spread information is absent). We show that this result is driven by
greater homogeneity in the ability of tournament contestants when the availability of tournament prize spread
information provides self-selection opportunity.
JEL Classifications: C91; D83; M40.
Data Availability: Experimental data are available from the authors on request.
Keywords: tournament; prize spread information; self-selection; motivation; effort.

I. INTRODUCTION

Many U.S. corporations, including firms such as Microsoft Corporation, General Electric, American International
Group, Inc. (AIG), Metropolitan Life Insurance Company (MetLife), American Express Company, Hewlett-
Packard Company (HP), GlaxoSmithKline plc, and many audit firms, use explicit tournament-based incentives in
which pay raises or bonuses are attached to a performance rank (Berger, Klassen, Libby, and Webb 2013; Cohan 2012; Kwoh
2012; Newman and Tafkov 2014; Brazel, Leiby, and Schaefer 2017). For example, LendingTree used relative performance
assessments to categorize employees into three groups: the top 15 percent are rewarded with higher bonuses and pay raises, the
middle 70 percent receive smaller bonuses and pay raises, and the bottom 15 percent receive little to no bonuses or pay raises
(Kwoh 2012). Even for firms that do not use explicit tournament-based incentives for pay raises or bonuses, promotion

We thank Kathryn Kadous (editor) and two anonymous reviewers for their helpful suggestions. We also thank Eric Chan, Katlijn Haesebrouck, Steven
Kachelmeier, Stephan Kramer, Ranjani Krishnan, Terence Ng, Ivo Tafkov, Hun-Tong Tan, Mathijs van Peteghem, Brian White, Laura Wang, Jeffrey
Williams, and Michael Williamson for their valuable comments. We also thank brownbag participants at Emory University Experimental Brownbag and
the University of Illinois at Urbana–Champaign; workshop participants at KU Leuven, Jinan University (Guangzhou), Nanyang Technological University,
and The University of Texas at Austin; and participants at the 2016 Management Accounting Section Midyear Meeting and the 2016 European
Accounting Association Annual Congress for their very helpful comments.
Editor’s note: Accepted by Kathryn Kadous, under the Senior Editorship of Mark L. DeFond.
Submitted: November 2016
Accepted: October 2017
Published Online: October 2017

decisions represent an implicit tournament-based incentive (Baker, Gibbs, and Holmstrom 1994a, 1994b; Prendergast 1999).
Tournament Prize Spread, which is the difference between rewards to employees in top performance categories (or employees
at higher ranks) and rewards to employees in low performance categories (or employees at lower ranks) under such tournament-
based incentives, varies across organizations. Some companies maintain a large prize spread across performance categories or
ranks, while others divide the prizes more equally (Towers Perrin 2003).
In many firms, employees do not receive information about tournament prize spread, and job candidates do not receive
information about pay differentials for the various ranks prior to joining the company. Many companies still advocate pay
secrecy and remain silent about pay levels across ranks for fear that large pay differences may undermine employee morale
(Card, Mas, Moretti, and Saez 2012). A growing number of practitioners, including executives and human resource
professionals, however, advocate full transparency in pay practices, or at least the disclosure of wage policies (Gascoigne
2013). Salary information or salary scales across ranks are increasingly publicly disclosed, either internally or even externally,
such as in U.S. public universities. For example, an IT firm, Buffer, discloses the salary formula openly inside the firm
(Gascoigne 2013). Many states, such as California and Georgia, have implemented a "Right-to-Know" law that permits
searching for the salary of any state employee (Card et al. 2012). We provide an important rationale for making prize spread
information available to job candidates by examining both the self-selection and motivation effects of the availability of
tournament prize spread information.
First, we argue that tournament prize spread information, if available to potential employees, plays an important role in
their selection of organizations (Rynes, Gerhart, and Minette 2004, 393). Drawing on behavioral game theory (Stahl and
Wilson 1994; Camerer 2003), which assumes that individuals do not fully incorporate others’ strategy into their decisions, we
predict that the availability of the prize spread information leads to sorting of employees based on their ability. That is, high-
ability candidates are more likely to select into organizations with large tournament prize spreads, whereas low-ability
candidates are more likely to select into organizations with small tournament prize spreads. Second, we predict that such
separation of employees generates a homogeneous pool of tournament contestants, which, in turn, increases employees’
motivation to expend effort. We further argue that this motivation effect due to self-selection is stronger in tournaments with
larger prize spreads.
We use an experiment in which participants, endowed with high or low ability, either select between a large prize spread
tournament and a small prize spread tournament (Self-Selection treatment) or are randomly assigned to one of these two types
of tournament schemes (Random Assignment treatment). The self-selection treatment proxies for an environment in which
prize spread information is available to potential employees, while the random assignment treatment proxies for an
environment in which prize spread information is unavailable to potential employees. These two types of tournaments have the
same expected prize amount, but the large prize spread tournament has a larger winner prize and, thus, a smaller loser prize
compared with the small prize spread tournament. Participants compete with other contestants who enter into the same type of
tournament by choosing effort levels. Performance is a function of the participants’ assigned ability level, their chosen effort,
and random noise.
We find that participants with high ability in the Self-Selection treatment are more likely to enter the large-spread
tournament than those with low ability. Our mediation test shows that this sorting effect arises because participants with high
ability are more likely than their low-ability counterparts to perceive a high chance of winning tournaments. This is consistent
with behavioral game theory suggesting that individuals do not fully consider the tournament choices of others. Specifically,
high-ability employees are attracted by the higher winner prize in the large-spread tournaments even though choosing the
small-spread tournaments would yield higher payoffs. Consistent with the predicted motivation effect, we further find that the
average effort level is higher in the Self-Selection treatment than in the Random Assignment treatment in the large-spread
tournament. Furthermore, we show that, consistent with our theory, the higher effort in the Self-Selection treatment (relative to
the Random Assignment treatment) is driven by greater homogeneity in contestants’ ability, which mitigates both the
complacency effect and giving-up effect commonly found in tournaments. Moreover, results from two supplemental
experiments show that the sorting effect and the underlying causal theoretical mechanisms are robust to alternative design
choices.
Our study makes the following contributions. First, prior accounting literature on the selection effect of incentive contracts
(e.g., Chow 1983; Waller 1985; Waller and Chow 1985; Kachelmeier and Williamson 2010; Hales, Wang, and Williamson
2015) has mainly focused on the selection between various schemes (e.g., fixed pay versus performance-based pay) in which an
individual’s payoff is not affected by other people’s behavior (e.g., Chow 1983). Our study complements these studies by
examining selection in a setting where an individual’s payoff is affected by other people’s behavior. We document that in these
more complex settings, people do not fully consider others’ decisions. Given that high-ability employees are attracted by the
higher winner prize in the large-spread tournaments, prize spread information availability can help firms sort employees based
on ability. To the extent that tournament prize spread information improves firm performance through both the self-selection
effect and the motivation effect, companies may find it beneficial to disclose tournament prize spread information, such as inter-
rank wage differences, to job candidates or to employees instead of maintaining pay secrecy among different performance
ranks.1
Second, from a theoretical perspective, this study yields novel insights into the relation between the self-selection effect
and the motivation effect of incentive contracts. Prior studies often assume these two effects to be independent of each other.
For example, the literature on selection effect assumes that the presence of self-selection does not influence employee
motivation (Prendergast 1999; Leuven, Oosterbeek, Sonnemans, and Van der Klaauw 2011). The literature on motivation
effect does not consider self-selection and finds mixed evidence about the effort effects of tournament prize spreads (e.g.,
Audas, Barmby, and Treble 2004; Fershtman and Gneezy 2011; Carr 2011). Our study sheds light on these mixed findings by
suggesting that these two effects in tournaments are interrelated. Specifically, the motivation effect of tournament prize spread
is contingent on the information environment; that is, whether job candidates have the information about prize spread prior to
making their selection and can select based on prize spread information.
Finally, our study contributes to a large stream of accounting research examining the efficacy of tournament incentive
schemes (e.g., Berger et al. 2013; Hannan, Krishnan, and Newman 2008; Tafkov 2013; Newman and Tafkov 2014; Kelly,
Presslee, and Webb 2017; Knauer, Sommer, and Wohrmann 2017). Prior studies have documented two serious drawbacks of
using tournaments: the complacency effect and giving-up effect (e.g., Libby and Lipe 1992; Berger et al. 2013). Our findings
suggest that self-selection into tournaments of varying prize spreads leads to greater homogeneity in contestant ability, which
mitigates the complacency and giving-up effects. By identifying a potential mechanism to level the playing field in
tournaments, our study suggests a promising way to reap the benefits of tournaments while avoiding potential negative
consequences, such as perception of an unfair contest among people of varying ability (Nalebuff and Stiglitz 1983).

II. THEORY AND HYPOTHESES

Prior Literature
The prior literature on the selection effect of incentive contracts has mainly focused on the selection between various
incentive schemes in which others’ selection choice does not affect an individual’s payoff, such as the selection between fixed
pay and performance-based pay. For example, Chow (1983) shows that high-ability participants are more likely than low-
ability participants to select into a budget-based incentive contract instead of a fixed-pay contract. Dohmen and Falk (2011) find
that high-ability participants, relative to low-ability participants, are more likely to enter a tournament competition rather than a
fixed-wage or a piece-rate incentive contract.
Recently, a few studies examine the selection among different types of tournaments (e.g., Cason, Masters, and Sheremeta
2010; Leuven et al. 2011; Damiano, Hao, and Suen 2012; Azmat and Moller 2018; Morgan, Sisak, and Vardy 2016).2 Based
on the assumption of full strategic thinking, these studies demonstrate analytically that tournament prize spread information
may not always sort individuals on ability because individuals are able to anticipate stronger competition in the tournaments
with larger prize spreads.3 These prior studies examine the impact of factors other than the availability of prize spread
information on selection. For example, Azmat and Moller (2018) study the effect of talent distribution on selection and show
that in a sports setting, faster runners avoid selecting tournaments with larger prize spreads when they anticipate strong
contestants. Morgan et al. (2016) examine the effects of number of prizes on selection, and Damiano et al. (2012) study the
effect of the incumbents’ talent level in the tournament on selection.
In contrast to these studies, we focus on the impact of the availability of prize spread information on the selection of
employees who differ in ability. This factor is important in a business setting given that companies have different policies
regarding their disclosure of prize spread information. Furthermore, in contrast to prior analytical work, we argue that sorting
can arise in business settings if we relax the assumption of full strategic thinking. Based on a large body of behavioral game

1. Our results further suggest that when prize spread information across companies is made available to potential employees, firms can increase the
tournament prize spread to enhance the motivation effect. However, prior research shows that a large prize spread in tournaments can have
undesirable consequences, such as increased sabotage and decreased cooperation among employees. Therefore, companies should consider both the
costs and benefits of increasing the prize spread in tournaments.
2. Leuven et al. (2011) also examine the selection among tournaments. However, they examine two tournaments that differ with respect to total prize
amount. They show that the tournament with the larger total prize attracts students with stronger academic skills.
3. Cason et al. (2010) indirectly compare selection between tournaments. Participants are asked to either select between a winner-take-all tournament and
a piece-rate incentive scheme or select between a proportional payment tournament in which the same prize is divided among contestants by their share
of total achievement and a piece-rate incentive scheme. They find that the first selection scenario (selection between a winner-take-all tournament and
piece-rate) elicits more entry among low-ability contestants, but does not discourage entry of high-ability contestants, as compared to the second
selection scenario. Different from Cason et al. (2010), we ask participants to directly select between two tournaments, in which the strategic
consideration becomes important. Furthermore, participants in our study are only informed of the distribution of ability across participants before
selection, while participants in Cason et al. (2010) know the ability of other participants before selecting between tournaments and a piece-rate scheme.


theory suggesting limits to strategic thinking (Stahl and Wilson 1994; Camerer, Ho, and Chong 2004; Crawford, Costa-Gomes,
and Iriberri 2013), we predict that the provision of prize spread information prior to selection can lead to sorting on employee
ability. In addition, we argue that such sorting, in turn, can create positive effects for employee motivation in the tournament
competition.
Another stream of literature focuses on the motivational effect of tournament prize spread. This line of literature produces
mixed evidence. For example, using proprietary data from an organization, Audas et al. (2004) find that larger inter-rank
remuneration spreads motivate employees to work harder. However, Carr's (2011) survey research suggests that larger wage
inequality does not increase employees' work hours. In a field experiment, Fershtman and Gneezy (2011) find that larger
prize spreads motivate some contestants to work harder, but they also cause a substantial percentage of the contestants to give
up. In a lab experiment, Brown, Evans, Moser, and Presslee (2016) find that larger pay dispersions reduce employees’
motivation to work hard by destroying their perception of pay fairness. However, none of these studies consider situations in
which potential employees can select into tournaments based on prize spread. We argue that this neglect of the selection effect
may be a potential reason for the mixed evidence.
In sum, we extend prior literature by examining both the selection effect and the motivation effect of the availability of
prize spread information. We next present the research setting and develop hypotheses.

Research Setting
To examine the effects of self-selection due to prize spread information, we extend the experimental framework of Hannan,
Towry, and Zhang (2013) by introducing three elements: (1) two types of tournaments with the same total prize: one with a
large prize spread and the other with a small prize spread; (2) employees either select into or are randomly assigned to one of
the two types of tournaments; and (3) employees are endowed with either a high- or low-ability parameter, with equal
probability. Compared to low-ability employees, high-ability employees realize higher return for effort.
Participants who enter the same type of tournament are randomly matched into pairs. Participants then compete with the
other member of their pair by choosing their level of effort. Performance is a function of the assigned ability parameter, the
chosen effort level, and random noise. The person in the pair who achieves the higher performance receives a winner prize in
the respective tournament, while the other person receives a loser prize. We assess motivation by examining the effort levels of
contestants.

Self-Selection Effect
To develop our key prediction that making tournament prize spread information available prior to selection helps to sort
individuals based on ability, we rely on behavioral game theory. This theory suggests that individuals often fail to fully
incorporate others’ strategy into their decision making (Stahl and Wilson 1994; Camerer et al. 2004). Experimental studies in
economics demonstrate that the majority of participants do not fully incorporate others’ strategy into their decisions (see
Crawford et al. [2013] for a review). In experimental financial markets, Hales (2009) finds that investors over-trade because
they fail to incorporate the information from actions of other traders into their decisions. Prior literature provides empirical
evidence of limited strategic thinking among individuals, including moviegoers, newspaper readers, professional sellers, and
even CEOs (Bosch-Domenech, Montalvo, Nagel, and Satorra 2002; Simonsohn 2010; Goldfarb and Xiao 2011; Brown,
Camerer, and Lovallo 2012, 2013). Prior research also suggests that limited strategic thinking may persist in repeated decisions
(Ball, Bazerman, and Carroll 1991).
When individuals only partially incorporate others' strategy, they often rely too heavily on the "unconditional" own
payoff, which refers to their own payoffs under the assumption that many people randomly choose an action. This
phenomenon, labeled the own payoff effect, occurs because individuals anchor on their own payoffs as an initial step and do not
fully adjust for strategic thinking. Studies from experimental economics find evidence in support of the own payoff effect (Ochs
1995; Goeree and Holt 2001; Goeree, Holt, and Palfrey 2003). In an auditing setting, Bowlin, Hales, and Kachelmeier (2009)
find that participants acting as firm managers engage in more aggressive reporting when the payoff from aggressive reporting
(and, thus, the unconditional own payoff) increases. However, participants fail to realize that an increase in the payoff from
aggressive reporting will also increase auditors' scrutiny of the reporting, such that, in the end, there is no benefit from
increasing aggressive reporting.
If employees fail to fully consider the choices of others when prize spread information is available prior to selection, then
they may over-rely on their unconditional own payoff; that is, the payoff based on the assumption that many employees would
randomly choose a tournament. In assuming random tournament choice, low-ability employees may perceive a low chance of
winning in both large-spread tournaments and small-spread tournaments. Therefore, the unconditional own payoff for low-
ability employees is more likely to be the loser prize than the winner prize, so low-ability employees will pay more attention to
the loser prize. Since the loser prize is higher in the small-spread tournament than in the large-spread tournament, low-ability
employees likely will choose the small-spread tournament. In contrast, high-ability employees may choose the large-spread
tournament. By assuming random tournament choices, high-ability employees tend to perceive that they have a high chance of
winning in both types of tournaments. Therefore, the unconditional own payoff for high-ability employees is more likely to be
the winner prize than the loser prize, so high-ability employees will pay more attention to the winner prize and, therefore, likely
will choose the large-spread tournament for its relatively higher winner prize. In sum, these arguments suggest that high-ability
employees are more likely to select into the large-spread tournament compared to low-ability employees.4 Thus, in H1, we
predict that making the prize spread information available to employees facilitates employee sorting based on their ability:
H1: When employees have the opportunity to select into tournaments of varying prize spreads, high-ability employees are
more likely than low-ability employees to select into the large-spread tournament.
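For illustration, the minimal sketch below works through the own payoff logic numerically, using the prize parameters of the experiment described in Section III (winner/loser prizes of 800/200 points in the large-spread tournament and 550/450 points in the small-spread tournament). The perceived winning probabilities are illustrative assumptions only, and effort costs are ignored.

```python
# Illustrative sketch of the "unconditional own payoff" argument behind H1.
# Prize parameters (winner, loser) follow the experiment in Section III; the
# perceived winning probabilities are assumed values, and effort costs are ignored.
LARGE = (800, 200)   # large-spread tournament
SMALL = (550, 450)   # small-spread tournament

def unconditional_expected_prize(p_win, prizes):
    winner, loser = prizes
    return p_win * winner + (1 - p_win) * loser

P_HIGH, P_LOW = 0.75, 0.25   # assumed perceived winning chances under random opponent choice

for label, p in [("high ability", P_HIGH), ("low ability", P_LOW)]:
    print(label,
          "large-spread:", unconditional_expected_prize(p, LARGE),
          "small-spread:", unconditional_expected_prize(p, SMALL))
# high ability: 650.0 (large) vs. 525.0 (small) -> anchors on the winner prize, prefers the large spread
# low ability:  350.0 (large) vs. 475.0 (small) -> anchors on the loser prize, prefers the small spread
```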
There is considerable tension with regard to this hypothesis. Game theoretical models often assume that individuals have
full strategic thinking capability (Myerson 1999; Cason et al. 2010). Appendix A summarizes a game theoretical proof
suggesting that the sorting effect of prize spread information availability does not arise if employees engage in full strategic
thinking and incorporate others’ strategies into their decisions. Specifically, because the probability of low-ability employees
winning a tournament is low in general and the loser prize is higher in the small-spread tournament than in the large-spread
tournament, low-ability employees tend to choose the small-spread tournament. As a result, contestants in the large-spread
tournament are likely to be those of high ability. Considering the choice of low-ability employees, high-ability employees
should anticipate a competition with other high-ability peers if they choose the large-spread tournament. To compete with high-
ability peers in this tournament, employees have to put in high effort, which largely reduces their net payoff (expected prize
minus effort cost). By contrast, if high-ability employees choose the small-spread
tournament, then they are likely to obtain the winner prize with lower effort. Therefore, when prize spread information is
available, high-ability employees, like low-ability employees, may prefer the small-spread tournament.

Motivation Effect Induced by Self-Selection


Prior literature on tournaments suggests that tournament contestants’ motivation to expend effort becomes stronger when
they are homogeneous in terms of ability (Lazear and Rosen 1981; Schotter and Weigelt 1992; Lynch 2005; Brown 2011). This
situation occurs because high-ability employees need to put in a substantial amount of effort to win when they are matched
against other high-ability contestants. Likewise, low-ability participants might be more motivated when matched against other
low-ability contestants because they expect a better chance to win (Brown 2011). In contrast, high-ability employees are less
motivated when matched against low-ability contestants, because these competitors pose weaker competition and may even give up in
the end.
Employees in organizations may have the opportunity to learn whether their competitors are equally able through
performance feedback and/or personal interactions.5 If we presume that prize spread information facilitates sorting on ability,
then employees who select into tournaments may learn that they are competing with peers of comparable ability. This
knowledge can motivate them to deliver effort. Conversely, when prize spread information is unavailable and employees are
randomly assigned to tournaments, high-ability employees and low-ability employees are mixed together. These employees
may learn that they are not necessarily competing with comparable peers. If high-ability employees learn that they compete
against peers of lower ability, then they may exert less effort because of the complacency effect. Similarly, if low-ability
employees learn that they compete against peers of higher ability, then they may exert less effort because of the giving-up effect
(Libby and Lipe 1992; Casas-Arce and Martinez-Jerez 2009; Muller and Schotter 2010; Berger et al. 2013).
Taken together, because prize spread information creates a competition between employees with comparable ability levels,
we predict that employee effort will be higher in the presence of the prize spread information:6

4. According to the theory of Level k strategic thinking (see Crawford et al. [2013] for a review), the own payoff effect occurs when individuals exhibit
Level 1 strategic thinking. Prior studies show that a substantial portion of the population exhibit Level 1 strategic thinking; that is, they assume that
others behave randomly. Individuals with strategic thinking at Level 2 or higher levels will still choose the large-spread tournament as long as they
think that a substantial portion of the population behaves randomly.
5. The extent and timeliness with which contestants can learn their competitors' ability depend on a firm's performance feedback system, as well as the
extent of interactions among contestants.
6. While we predict that self-selection will mitigate potential complacency and giving-up effects in tournaments, prior research has tried to address the
complacency and giving-up effects by using multiple rewards prize structures (e.g., Freeman and Gelber 2010; Newman and Tafkov 2014), or repeated
tournaments whereby performance does not carry over from one tournament period to the next (e.g., Choi, Newman, and Tafkov 2016). Prior research
also documents that the percentage of competitors eligible to win a tournament can moderate the effects of ability heterogeneity on effort (e.g., Harbring
and Irlenbusch 2008; Knauer et al. 2017), such that complacency is more likely when there is a higher percentage of winners and giving up is more
likely when there is a low percentage of winners.


H2a: Employee effort is higher when employees have the opportunity to select into tournaments of varying prize spreads
compared to when they are randomly assigned to tournaments of varying prize spreads.
Other things being equal, the large-spread tournament may offer stronger incentives than the small-spread tournament
and, as such, may induce greater efforts from employees (Lazear and Rosen 1981; Ehrenberg and Bognanno 1990). Given
the stronger motivational effect of the large-spread tournament, homogeneity in contestant ability induced by self-selection is
likely to increase the efforts of contestants to a larger extent in the large-spread tournament. In the small-spread tournament,
employees may be satisfied with receiving a relatively good loser prize without having to work hard (Lazear and Rosen
1981; Ehrenberg and Bognanno 1990). So even when employees are matched with contestants of comparable ability, the
increase in their effort may be limited. As such, we predict an interaction suggesting that the availability of prize spread
information prior to selection, compared to the absence of such information, increases employee effort to a greater extent in
the large-spread tournament than in the small-spread tournament:
H2b: The effort effect as a result of the opportunity to select into tournaments of varying prize spreads, compared to
random assignment, is greater in the large-spread tournament compared to the small-spread tournament.
The above hypotheses are not without tension. First, in the large-spread tournament, it is possible that low-ability
employees are motivated by the strong competition and desire to emulate the high-ability employees, so low-ability employees
may not necessarily give up in the large-spread tournament when they are randomly assigned and happen to pair up with high-
ability employees. Second, in the small-spread tournament, some low-ability employees may still be intrinsically motivated to
win the tournament despite the relatively smaller winner prize and may not be satisfied with receiving a relatively good loser
prize without having to work hard.

III. METHOD

Participants
We recruited participants from undergraduate study programs at a large university in Singapore. In total, 148 students
participated in the experiment.7 We conducted six sessions and each one involved 16 to 30 participants. Participants were 21.98
years old, on average, and 38.36 percent were male. On average, they had taken 4.64 economics courses and worked for 9.36
months in part-time jobs, and 93.84 percent reported having had some work experience. They earned an average of 13.94
Singapore dollars.

Experimental Design and Manipulations


We employ a 2 × 2 between-subjects design that manipulates two factors. The first factor is the opportunity to select into
tournaments of varying prize spreads. Participants are randomly assigned to either the Self-Selection treatment or the Random
Assignment treatment. In the Self-Selection treatment, participants are informed about the two types of tournaments, a large-
spread tournament and a small-spread tournament. They know that the tournaments differ in the prize spread (the difference
between winner prize and loser prize), but have the same total prize (the sum of winner prize and loser prize). Participants then
select into either the large-spread or the small-spread tournament. The Self-Selection treatment, thus, represents an environment
in which employees have prize spread information prior to selecting into a firm. In the Random Assignment treatment,
participants are randomly assigned into one of the two types of tournaments and are unaware of the other type of tournament.8
This setting, thus, represents an environment where employees do not have prize spread information and, thus, cannot select
based on this information.
The second factor is the tournament prize spread. The large-spread tournament offers a winner prize of 800 points and
a loser prize of 200 points. The small-spread tournament offers a winner prize of 550 points and a loser prize of 450
points.9

7. This study received the approval of the Institutional Review Board of both Nanyang Technological University and the University of Illinois at Urbana–
Champaign.
8. We do not tell participants in the Random Assignment treatment about the existence of the other type of tournament because, in practice, it would be
uncommon for companies to disclose prize spread information, but prevent employees from using it in their selection. Consistent with Hales et al.
(2015), this design choice can further avoid ‘‘disappointment’’ effects. That is, if participants were aware of both types of tournaments, then those
assigned to the less desirable tournament may be disappointed and hence reduce effort.
9. We hold constant the total prize amount across tournaments in order to keep the expected prize amount for participants constant and isolate the effect of
tournament prize spread. This design choice can strengthen internal validity. In addition, this design choice is consistent with practice, where companies
often have to decide how to allocate a fixed amount of total wage budget or total bonus pool among top and bottom performers.


Similar to Hannan et al. (2013), we use an abstract effort task to maximize experimental control.10 Specifically, the task is a
two-person tournament competition in which the highest performer receives the winner prize, while the other receives the loser
prize. If two contestants realize the same performance, participants are informed that one is randomly selected to receive the
winner prize and the other gets the loser prize.
Participants’ performance is defined in the following Equation (1):
Performance = Ability Parameter × Effort Level + Noise    (1)
To capture differences in employee ability, we set the ability parameter to be either high (Effort multiplied by 3) or low
(Effort multiplied by 1). Participants know that the computer randomly assigns either high ability or low ability to every
participant and are clearly informed about the ability level they are endowed with. Consistent with prior studies, we chose to
assign ability levels to reduce confounding factors, such as variations in an individual’s overconfidence about his or her own
ability (Schotter and Weigelt 1992; Hannan, Kagel, and Moser 2002; Meidinger, Rulliere, and Villeval 2003; Charness and Kuhn
2007; Charness and Dufwenberg 2011; Cabrales, Charness, and Villeval 2011; Douthit 2014; Arnold, Hannan, and Tafkov 2015).
Participants choose their effort level, ranging from 1 to 30, which positively affects performance. The cost of effort is
quadratic: 3 × e²/14. Appendix B presents the cost-of-effort table. Because performance is always a noisy indicator of effort in
the real world, we include noise in the performance formula to increase external validity. The noise is uniformly distributed
between −10 and 10.
Participants who enter into the same type of tournament are then randomly matched into pairs and compete with the other
member of their pair for ten periods by choosing an effort level. If an odd number of participants enters into one type of
tournament, we randomly select one participant and pay him or her to leave. After each period, participants individually learn
their own effort choice and performance, the performance of their competitor, and their earnings.
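A minimal simulation of one period of this payoff structure is sketched below. It assumes a continuous uniform noise term and random tie-breaking, as described above; the function and variable names are ours and are not part of the z-Tree implementation.

```python
import random

ABILITY = {"high": 3, "low": 1}                       # performance multipliers
PRIZES = {"large": (800, 200), "small": (550, 450)}   # (winner prize, loser prize) in points

def effort_cost(effort):
    # quadratic cost of effort, 3 * effort^2 / 14 (see Appendix B)
    return 3 * effort ** 2 / 14

def play_period(tournament, ability_a, effort_a, ability_b, effort_b, rng=random):
    """Simulate one period for a matched pair; returns the two net payoffs in points."""
    perf_a = ABILITY[ability_a] * effort_a + rng.uniform(-10, 10)
    perf_b = ABILITY[ability_b] * effort_b + rng.uniform(-10, 10)
    winner_prize, loser_prize = PRIZES[tournament]
    if perf_a == perf_b:                 # ties are broken at random
        a_wins = rng.random() < 0.5
    else:
        a_wins = perf_a > perf_b
    payoff_a = (winner_prize if a_wins else loser_prize) - effort_cost(effort_a)
    payoff_b = (loser_prize if a_wins else winner_prize) - effort_cost(effort_b)
    return payoff_a, payoff_b

# Example: a high-ability and a low-ability contestant, both choosing effort 20,
# competing in the large-spread tournament.
print(play_period("large", "high", 20, "low", 20))
```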
For the selection effect (H1), we examine whether high-ability participants, relative to low-ability participants, are more
likely to select into the large-spread tournament in the Self-Selection treatment. For the motivation effect, we examine whether
average effort is higher in the Self-Selection treatment than in the Random Assignment treatment (H2a) and whether this effort
difference is larger for the large-spread tournament (H2b).

Experimental Procedures
We conducted six computerized experimental sessions using z-Tree (Fischbacher 2007), three for each Self-Selection and
Random Assignment treatment. Each session lasted about one hour. Upon entering the computer lab, participants received a
ticket number to claim their payout at the end of the experiment. Based on the ticket number, participants in the Random
Assignment treatment were randomly assigned to one of the two types of tournaments and were not told about the existence of
the other type of tournament. To guarantee anonymity, participants never submit their names.
Participants first received instructions in hard copy, containing detailed information about the tournament, the performance
function, and the cost-of-effort function. Participants knew the computer would randomly assign either a high or low ability
level to every participant. After reading the instructions, participants moved on to work on the computer. They first received
information about their assigned ability, either high or low. Participants in the Self-Selection treatment next selected one type of
tournament to enter. To enhance understanding, we had participants answer a set of questions, and they could proceed only
after answering all questions correctly.
Participants were then randomly paired with a participant who entered into the same type of tournament. Each period,
participants had to choose an effort level between 1 and 30, with increments of 1. They could refer to the cost-of-effort table in the
hard copy instructions when choosing the effort level (see Appendix B). The person with a higher performance won the
competition and received the winner prize, while the other person earned the loser prize. If the performance of two contestants was
the same, then one was randomly assigned to the winner prize, while the other received the loser prize. At the end of each period,
all participants received feedback about the tournament outcome, their own effort choice and performance, and the competitors’
performance. Following prior literature (Cason et al. 2010), we randomly draw one round from the ten rounds to determine the
cash reward.11 Participants’ net payoff (prize earned minus cost of effort) in the randomly chosen round was converted into real
Singapore dollars at the exchange rate of 30 (30 points = S$1).

10. Using a gift exchange experiment, Bruggen and Strobel (2007) show that chosen effort and real effort are equivalent. Yet, they show a significantly
higher variance in effort in the real effort task, suggesting that a real effort task may result in a lower level of experimental control. Other studies that
use abstract effort tasks include Frederickson (1992), Frederickson and Waller (2005), Hannan (2005), and Kuang and Moser (2009). That said, we
acknowledge the trade-off between internal validity and external validity in making this design choice. While an abstract effort task maximizes internal
validity, it reduces external validity because the higher variance associated with real effort tasks is a feature that does exist in the real world.
11. Paying participants for one randomly chosen round, instead of paying them based on total payoffs of all ten rounds (Choi et al. 2016), may increase
their motivation to expend effort in both the Self-Selection and the Random Assignment treatment. Yet, ex ante, we are unaware of any evidence that
this design choice would bias in favor of our prediction H2, which compares effort level across the two treatments.


FIGURE 1
Illustration of Experimental Procedures of the Main Experiment

Participants also completed a post-experimental questionnaire measuring potential covariates, including risk aversion,
optimism, and competitive orientation, that may influence individuals’ tournament choice, as well as their effort choice. We
measure risk aversion with 11 questions adapted from Dahlback (1990) that ask participants to rate whether they agree with
statements on risk attitude (e.g., "I can be rather incautious and take big risks.") on a seven-point Likert scale. Following
Dahlback (1990), we use the principal-component method to extract the first component that accounts for the maximal amount
of variance in the answers to the 11 questions. To ease interpretation, we multiply the first component by −1. The higher the
measure, the more risk-averse individuals are. We measure optimism with six questions from Scheier, Carver, and Bridges
(1994),12 and competitive orientation with five questions on an individual’s inclination to accept competition adapted from
Ryckman, Hammer, Kaczor, and Gold (1990).13 In addition, we measure a mediating variable, perceived chance of winning, by
asking participants right after their selection choice whether they perceive a higher winning chance than the other member of
their pair. Finally, we collect demographic information, such as age, gender, major, and working experience. Figure 1 illustrates
our experimental procedures.
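As an illustration of the risk-aversion measure, the sketch below extracts the first principal component from the 11 Likert items and flips its sign, following Dahlback (1990). The input array name and the standardization step are our assumptions, and the sign flip should be checked against the component loadings for a given coding of the items.

```python
import numpy as np
from sklearn.decomposition import PCA

def risk_aversion_score(risk_items):
    """risk_items: (n_participants x 11) array of 1-7 Likert responses (hypothetical input)."""
    x = np.asarray(risk_items, dtype=float)
    x = (x - x.mean(axis=0)) / x.std(axis=0)          # standardize items (assumed preprocessing)
    first_component = PCA(n_components=1).fit_transform(x).ravel()
    # Multiply by -1 so that higher scores indicate greater risk aversion, as in the paper;
    # whether the flip is needed depends on the item coding, so inspect pca.components_ first.
    return -1.0 * first_component
```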

12. Participants rate six statements on optimism on a seven-point Likert scale (Cronbach's alpha = 0.63). An example of a statement is "In uncertain times,
I usually expect the best." Factor analysis yields only one factor with an eigenvalue greater than 1, suggesting that the six items capture a single
construct. Hence, we sum the answers to the six items, with three reversely coded, to obtain the measure of optimism.
13. The five questions ask participants to rate whether the statements are true descriptions of themselves on a five-point Likert scale (Cronbach's alpha =
0.59). An example of these statements is "Winning in competition makes me feel more powerful as a person." Factor analysis yields only one factor
with an eigenvalue greater than 1, suggesting that these five items capture a single construct. Hence, we sum the answers to the five questions, with two
reversely coded, to obtain the measure of competitive orientation.


TABLE 1
Mean Comparison

                        Self-Selection                             Random Assignment
                 Small        Large        Total         Small        Large        Total         Diff.
                 (n = 48)     (n = 34)     (n = 82)      (n = 30)     (n = 34)     (n = 64)      (t-stat/χ²)
HighAbility%     0.271        0.794        0.488         0.500        0.500        0.500         −0.012
                 (0.449)***   (0.410)***   (0.503)       (0.509)      (0.508)      (0.504)       (0.02)
EqualPairing%    0.708        0.706        0.707         0.533        0.353        0.438         0.270
                 (0.459)      (0.462)      (0.458)       (0.507)      (0.485)      (0.500)       (10.81)***
Effort           10.652       24.144       16.246        13.360       17.026       15.308        0.939
                 (8.066)***   (8.910)***   (10.730)      (8.867)*     (10.161)*    (9.742)       (0.56)
Payoff           461.850      358.059      418.815       445.040      415.862      429.539       −10.724
                 (50.165)***  (299.385)*** (202.953)     (71.233)*    (302.487)*   (226.115)     (0.84)

***, **, * Indicate significance levels of 0.01, 0.05, and 0.10, respectively.
Bold indicates that the comparison of means between the large-spread tournament and the small-spread tournament, Mean (large-spread tournament) −
Mean (small-spread tournament), within each treatment is significant at conventional levels. The last column compares the difference between the Self-
Selection and Random Assignment treatments. Standard errors for the mean comparison for the variables Effort and Payoff are clustered on the
competition-pair level.

Variable Definitions:
HighAbility% = the percentage of participants with high ability in the condition;
EqualPairing% = the percentage of participants in the condition who are paired with competitors of the same ability;
Effort = the average level of effort chosen by participants; and
Payoff = the average amount of points earned minus the cost of effort.

IV. RESULTS
We first test H1 (the selection effect) and the mechanisms underlying H1. We then present the tests of H2a and H2b (the
motivation effect) and the mechanisms underlying H2a and H2b. Finally, we discuss two supplemental experiments that
examine whether the selection effect (H1) is robust to alternative design choices.

Test of H1 (The Selection Effect)


H1 predicts that prize spread information produces a self-selection effect. As reported in Table 1, within the Self-Selection
treatment, 34 out of 82 participants select into the large-spread tournament.14,15 The percentage of high-ability participants is
higher in the large-spread tournament than in the small-spread tournament (79.4 percent versus 27.1 percent, χ²(1, N = 82) =
21.81, two-tailed p < 0.01), in line with H1.
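For illustration, this chi-square statistic can be reproduced from the cell counts implied by Table 1 (34 large-spread entrants of whom 79.4 percent are high ability, and 48 small-spread entrants of whom 27.1 percent are high ability); the sketch below simply runs a test of independence on those inferred counts.

```python
from scipy.stats import chi2_contingency

# Cell counts inferred from Table 1 (Self-Selection treatment): rows are tournaments,
# columns are [high ability, low ability].
table = [[27, 7],    # large-spread: 34 entrants, 79.4% high ability
         [13, 35]]   # small-spread: 48 entrants, 27.1% high ability

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(chi2, 2), dof, p)   # approximately 21.81, 1 df, p < 0.01
```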
We report one-tailed p-values for the tests of directional predictions (H1, H2a, and H2b) and two-tailed p-values for all
other tests. To formally test H1, we regress the participants' choice of tournament, a dummy variable LargeSpread (1 for large-
spread tournament, and 0 for small-spread tournament), on their ability. Table 2 displays the results. We find a positive and
significant coefficient on HighAbility (z = 4.35 without control variables and 4.26 with control variables, respectively, one-
tailed p < 0.01), suggesting that high-ability participants, relative to low-ability participants, are more likely to choose the
large-spread tournament. This result is robust to the inclusion of risk aversion, optimism, and competitive orientation as control
variables. Of the three personality traits, only risk aversion is significant (z = −2.80, two-tailed p < 0.01), suggesting that more
risk-averse participants are less likely to select the large-spread tournament.16
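A sketch of the Table 2 specification is shown below, using synthetic stand-in data with the column names used in the paper. Treating "robust standard errors" as a heteroskedasticity-consistent (HC1) covariance is our assumption about how that choice might be implemented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data (82 participants); replace with the actual experimental data.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "HighAbility": rng.integers(0, 2, 82),
    "Optimism": rng.normal(30, 5, 82),
    "RiskAversion": rng.normal(0, 1, 82),
    "CompetitiveOrientation": rng.normal(18, 3, 82),
})
df["LargeSpread"] = (rng.random(82) < 0.2 + 0.5 * df["HighAbility"]).astype(int)

logit_fit = smf.logit(
    "LargeSpread ~ HighAbility + Optimism + RiskAversion + CompetitiveOrientation",
    data=df,
).fit(cov_type="HC1")   # robust standard errors (assumed HC1)
print(logit_fit.summary())
```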

14. We do not have demographic information for two participants in the Self-Selection treatment who were paid to leave after their tournament choice due
to an odd number of participants in one tournament. We exclude them in all subsequent tests. Including them in the test of H1 does not change our
inferences.
15. Except for major (33 percent majored in economics in Random Assignment versus 60 percent in Self-Selection, χ²(1, N = 146) = 10.46, two-tailed p <
0.01), participants in the Self-Selection treatment and the Random Assignment treatment are similar with respect to age, gender, and working
experience (smallest p > 0.51). Controlling for major and other demographics does not change inferences of our hypotheses.
16. Participants in the large- or small-spread tournament are similar with respect to demographics, including age, gender, major, school year, and working
experience (smallest p > 0.23), and with respect to knowledge and cognitive ability, including math, number of economics courses taken, strategic
thinking ability, whether they took part in economic experiments, and strategic thinking tendency (smallest p > 0.43).


TABLE 2
Hypothesis Testing of the Self-Selection Effect (H1)

Logit regression results for:a

LargeSpread = a0 + a1 HighAbility + a2 Optimism + a3 RiskAversion + a4 CompetitiveOrientation + e

                             Expected      Coefficient      Coefficient
Independent Variable         Sign          (z-stat)         (z-stat)
Intercept                                  −1.61            2.30
                                           (−3.86)***       (1.04)
HighAbility (H1)             +             2.34             2.63
                                           (4.35)***        (4.26)***
Optimism                                                    0.01
                                                            (0.15)
RiskAversion                                                −0.46
                                                            (−2.80)***
CompetitiveOrientation                                      0.05
                                                            (0.54)
No. Observations                           82               82
Pseudo R²                                  0.21              0.30
Model Chi-square                           18.96***          22.82***

***, **, * Denote significance at the 0.01, 0.05, and 0.10 levels, based on one-tailed statistics for directional predictions, and two-tailed otherwise.
a Robust standard errors are used.
This table presents Logit regression results on the participants' choice of tournaments as a function of their ability levels.

Variable Definitions:
LargeSpread = 1 if the tournament is large-spread, and 0 otherwise;
HighAbility = 1 if the ability of the participant is high, and 0 otherwise;
RiskAversion = measured by factor analysis of the answers to 11 questions related to risk attitude using the principal-component method. Risk aversion is
−1 times the first component (Eigenvalue: 4.12) (Dahlback 1990). The higher this measure, the more risk-averse the participant;
Optimism = the summing of answers to six questions related to optimism, with three of the answers reversely coded (Scheier et al. 1994). The higher this
measure, the more optimistic the participant; and
CompetitiveOrientation = the summing of answers to five questions related to how strongly participants want to compete, with two of the answers
reversely coded (Ryckman et al. 1990). The higher this measure, the more competitive the participant.

Untabulated results further show that, on average, low-ability participants who choose the small-spread tournament earn
more than low-ability participants who choose the large-spread tournament (M = 455.03 versus M = 198.60, t25 = 5.20,
two-tailed p < 0.01). Similarly, high-ability participants also earn more from choosing the small-spread tournament as
compared to the large-spread tournament (M = 480.22 versus M = 399.40, t24 = 2.24, two-tailed p < 0.05). Thus, selecting the
large-spread tournament yields lower payoffs for high-ability participants.
To provide evidence on whether the selection effect would extend to a multi-period game, we had participants select into
tournaments again in a hypothetical scenario. Specifically, we asked participants to answer the following question: "If you were
given another chance to select, would you select into the same Firm?" in the post-experiment questionnaire. About 85 percent
of participants (41 out of 48) in the small-spread tournament and 67 percent (23 out of 34) in the large-spread tournament
indicated that they would select the same tournament again if given a second opportunity. Moreover, untabulated regression
results controlling for participants’ tournament performance (i.e., number of times they win) show that the first selection
decision explains their second selection decision (z = 4.48, two-tailed p < 0.01). This result shows that their tournament choice
is sticky, consistent with prior research suggesting that many participants fail to incorporate others’ strategy even in a repeated
game (e.g., Ball et al. 1991).17
Collectively, these results are consistent with individuals not fully incorporating the choices of other tournament
contestants when selecting between tournaments with different prize spreads.

17. In repeated tournaments in which participants can select tournaments in every period, the absence of information about the payoff that participants
would realize in the other tournaments makes it still difficult for participants to recognize that they did not make a correct tournament selection decision
(i.e., a decision based on full strategic thinking) in a prior period.


FIGURE 2
Test of the Mechanism Underlying the Selection Effect

***, **, * Denote significance at the 0.01, 0.05, and 0.10 levels, two-tailed.
This figure presents the results of a mediation test of the mechanism underlying the selection effect, namely, ability level leads to perceived winning
chance, which leads to choice of the tournament. This analysis uses path analysis for the Self-Selection subsample (n = 82). All paths displayed in this
figure are estimated jointly using the quasi-maximum likelihood method (QML). The standardized path coefficients and the
corresponding significance levels are shown next to each path. The paths with coefficients significant at the 0.10 level or less are
depicted in solid lines, and other paths are in dotted lines. We calculate the goodness-of-fit of this model using the standardized root mean square of the
residual (SRMR). The SRMR of the model is 0.00; values lower than 0.10 are considered favorable (Bentler 1995; Weston and Gore 2006).

Variable Definitions:
HighAbility = an indicator variable that equals 1 if the ability of the participant is high, and 0 otherwise;
LargeSpread = an indicator variable that equals 1 if the prize spread of the tournament is large, and 0 otherwise; and
PerceivedWinning = an indicator variable that equals 1 if the participants perceive that they have a higher winning chance than their pair members, and 0
otherwise.

Test of Mechanisms Underlying the Selection Effect


Our theory for H1 suggests that the perceived chance of winning the tournament drives participants’ selection of
tournaments. Specifically, due to limited strategic thinking, high-ability persons anchor on the "unconditional own payoff" and,
thus, the high winner prize, assuming a random selection of tournament by others. To test this theory, we asked participants in
the self-selection treatment right after their tournament choice to choose among three reasons for their tournament selection:
they perceive a higher winning chance than the pair member, they perceive a lower winning chance than the pair member, and
they perceive an equal winning chance as the pair member.
The results are consistent with the proposed mechanism underlying H1. Participants selecting the large-spread tournament,
relative to those selecting the small-spread tournament, are more likely to perceive a higher winning chance (85 percent versus
23 percent; χ²(2, N = 82) = 35.41, two-tailed p < 0.01).18 Also, consistent with our expectation, high-ability participants,
relative to low-ability participants, are more likely to perceive a higher winning chance (78 percent versus 21 percent; χ²(2, N =
82) = 29.00, two-tailed p < 0.01).
We formally test this mechanism by examining whether the perceived winning chance mediates the effect of the ability
level on selection. Results of the path analyses displayed in Figure 2 support a partial mediation. Besides the direct effect (z =
2.07, two-tailed p < 0.05), the ability level indirectly influences tournament selection through the participants' perception, as
both the path coefficient from high ability to the perceived chance of winning and the coefficient from the perceived winning
chance to the selection into the large-spread tournament are positive and significant (z = 6.09 and 3.96, two-tailed p < 0.01). The
impact of the ability level on tournament selection is reduced after incorporating the mediator (coefficient decreases from 0.51
to 0.25). The Sobel test also supports this partial mediation (Sobel z = 3.66, two-tailed p < 0.01).
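For reference, the Sobel statistic is the standard first-order formula for an indirect effect; a minimal sketch is given below. The numeric arguments in the example are placeholders, not the estimates from the paper's path model, which is estimated jointly by QML.

```python
import math

def sobel_z(a, se_a, b, se_b):
    """First-order Sobel z for the indirect effect a*b.
    a:  path coefficient from the treatment (HighAbility) to the mediator (PerceivedWinning)
    b:  path coefficient from the mediator to the outcome (LargeSpread)
    se_a, se_b: their standard errors
    """
    return (a * b) / math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)

# Example with placeholder values (not the paper's estimates):
print(round(sobel_z(0.57, 0.09, 0.50, 0.12), 2))
```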
In sum, our results provide support for H1. The availability of the prize spread information prior to selection sorts
participants based on their ability. The effect of ability on tournament selection is partially mediated by the participants’
perception of winning.

Test of H2a and H2b (The Motivation Effect)


H2a and H2b predict the motivational effect of the availability of prize spread information. As reported in Table 1, the
average effort is significantly higher in the Self-Selection treatment than in the Random Assignment treatment for the large-
spread tournament (M = 24.144 versus M = 17.026, t32 = 3.42, two-tailed p < 0.01), but lower (although statistically
insignificant) for the small-spread tournament (M = 10.652 versus M = 13.360, t37 = −1.61, two-tailed p = 0.12).

18. Participants who perceived a higher winning chance and selected into the large-spread tournament won 5.3 out of the ten periods, on average, which is
statistically similar to 50 percent (t28 = 0.57, two-tailed p = 0.57). This result suggests that these participants' perception is wrong from an ex post
perspective. On the other hand, participants who perceived a higher winning chance and selected into the small-spread tournament won most of the time
(7.27 out of 10.0 periods).


To formally test H2a and H2b, we regress effort on the two manipulated factors, SelfSelect (1 for Self-Selection treatment,
0 for Random Assignment treatment) and LargeSpread, and the interaction term between SelfSelect and LargeSpread, together
with the ability dummy (1 for high ability, 0 for low ability) and the main effect of period.19 Because we have participants
compete with the same pair members for ten periods, we cluster standard errors on the competition-pair level to control for the
correlation between observations from the same pair over ten periods (Petersen 2009).20
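
For concreteness, the following Python sketch shows one way such a regression with pair-clustered standard errors could be estimated using statsmodels; the DataFrame and column names (Effort, LargeSpread, SelfSelect, HighAbility, Period, pair_id) are assumptions about how the participant-period data might be organized, not our actual code.

```python
# Sketch of an effort regression with standard errors clustered on the
# competition pair; one row per participant-period is assumed.
import statsmodels.formula.api as smf


def fit_effort_model(df):
    model = smf.ols(
        "Effort ~ LargeSpread * SelfSelect + HighAbility + Period", data=df
    )
    # Cluster on the pair so the ten repeated periods of each competition
    # pair are not treated as independent observations.
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["pair_id"]})
```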
Panel A of Table 3 displays the results. Because we find a significant coefficient on the interaction term SelfSelect × LargeSpread, we cannot use the coefficient on SelfSelect to interpret the main effect of SelfSelect. Therefore, we use the results of the simple effect analysis reported in Panel B of Table 3 to interpret the main effect of SelfSelect. Results show that while the opportunity to select into tournaments of varying prize spreads has a significantly positive effect on effort for participants in the large-spread tournament (t30 = 2.68, one-tailed p < 0.01), it has an insignificant effect on effort in the small-spread tournament (t36 = −1.51, one-tailed p = 0.93). These results do not provide evidence for the main effect of the opportunity to select into tournaments of varying prize spreads on employee effort, so we do not find support for H2a.
However, the significant interaction effect documented in Panel A of Table 3 (t69 = 3.09, one-tailed p < 0.01) and the significant simple effect of SelfSelect in the large-spread tournament documented in Panel B of Table 3 provide support for H2b, suggesting that the positive motivation effect of the opportunity to select into tournaments of varying prize spreads depends on prize spread.
Because risk aversion influenced participants' tournament selection, we examine whether risk aversion influences participants' effort. Untabulated results show that risk aversion does not affect effort (t68 = 0.61, two-tailed p = 0.54). Risk aversion also does not change our inferences because the interaction term in Table 3, Panel A remains significant at similar levels of significance.21

Test of Mechanisms Underlying the Motivation Effect


Self-Selection Opportunity and Agent Homogeneity in Tournaments
Our theory proposes that the availability of self-selection opportunity increases the motivation to expend effort by creating
more homogeneous contestant pools. To test this mechanism, we use a mediation model with SelfSelect as the independent
variable, Effort as the dependent variable, and EqualPairing (1 if a participant is matched with a pair member with the same
ability level, and 0 otherwise) as the mediator. As shown in Figure 3, we find support for mediation. Specifically, the presence
of self-selection opportunity increases the probability of equal-ability pairing tournaments (z ¼ 2.38, two-tailed p , 0.05), and
tournaments with equal-ability pairing elicit higher efforts than tournaments without equal-ability pairing (z ¼ 4.02, two-tailed p
, 0.01). Note that the effect of SelfSelect on effort is fully mediated by EqualPairing, suggesting that the motivation effect of
Self-Selection is driven primarily by the availability of Self-Selection opportunity leading to more homogeneous contestant
pools. Untabulated results further show that the interactive effect between EqualPairing and LargeSpread in the path model in
Figure 3 on effort is positive and significant (z ¼ 4.93, two-tailed p , 0.01), suggesting that the motivation effect of equal-
ability pairing is larger for the large-spread tournament, consistent with H2b.22,23
In sum, making prize spread information available prior to selection leads to greater homogeneity in ability among the tournament contestants, which, in turn, enhances the contestants' motivation to expend effort.
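
The structure of this test can be illustrated with a simple two-step sketch: a logit for whether self-selection produces an equal-ability pairing, and a clustered regression of effort on that pairing indicator. This only mirrors Figure 3 loosely, since the paper estimates the paths jointly by QML, and the DataFrame and column names are again assumptions about the data layout.

```python
# Loose sketch of the Figure 3 mediation: SelfSelect -> EqualPairing -> Effort.
import statsmodels.formula.api as smf


def motivation_mediation_paths(df):
    # Path a: does the self-selection opportunity make equal-ability
    # pairings more likely?
    path_a = smf.logit("EqualPairing ~ SelfSelect", data=df).fit(disp=0)

    # Path b: do equal-ability pairings elicit higher effort, holding the
    # self-selection treatment fixed, with pair-clustered standard errors?
    path_b = smf.ols("Effort ~ EqualPairing + SelfSelect", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["pair_id"]}
    )
    return path_a, path_b
```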

19 We only include the main effect of period in the model, as none of the interactions of our manipulated factors with period, including the interaction between period and SelfSelect, the one between period and LargeSpread, and the three-way interaction, are statistically significant (smallest p > 0.44).
20 Inferences regarding our hypotheses are unchanged if we use repeated measures ANOVA or if we use the average effort of participants in the same competition pair (n = 73 pairs) over ten periods as the dependent variable.
21 Our results on the motivational effect still hold after controlling for fairness perception. We measure fairness perception by asking participants to rate the fairness of the incentive scheme and the fairness of the competition against their pair members (Brown et al. 2016). Untabulated results show that the coefficient on the interaction term LargeSpread × SelfSelect remains significant after controlling for the two fairness perception measures (t66 = 3.65, one-tailed p < 0.01), consistent with H2b. Also, for the simple effects, selection opportunity increases effort in the large-spread tournament (t29 = 2.80, one-tailed p < 0.01).
22 Results also show that the effect of equal pairing on effort is stronger for participants who correctly inferred their pair member's ability. We classify participants as having correctly inferred pair member ability based on their answer to a post-experimental question that asks them to guess their pair member's ability level. We include an interaction term between EqualPairing and a dummy indicator for whether participants correctly infer pair member ability in the mediation model. Untabulated results document a positive and significant interaction term (z = 2.98, two-tailed p < 0.01), consistent with this intuition.
23 We further check when EqualPairing starts to influence the participants' effort decisions. Untabulated results show that EqualPairing does not affect effort in the first period, presumably because participants have not yet received performance feedback (t67 = 0.38, two-tailed p = 0.71). EqualPairing exerts a positive influence on effort starting from Period 2, when employees receive the first performance feedback (t67 = 3.62, two-tailed p < 0.01). These results suggest that the motivational effect of homogeneous contestants does not manifest itself until employees have the opportunity to learn their competitors' type.


TABLE 3
Hypothesis Testing of the Motivation Effect (H2a and H2b)

Panel A: Regression Results for:a

Effort = α0 + α1 LargeSpread + α2 SelfSelect + α3 LargeSpread × SelfSelect + α4 HighAbility + α5 Period + ε

Independent Variable              Expected Sign    Coefficient (t-stat)
Intercept                                          13.17 (8.54)***
LargeSpread                                        3.67 (1.92)*
SelfSelect                                         1.88 (1.19)
LargeSpread × SelfSelect (H2b)    +                7.93 (3.09)***
HighAbility                                        3.62 (3.25)***
Period                                             0.29 (2.82)***
No. Observations                                   1,460
R2                                                 0.28
Model F-statistic                                  25.38***

Panel B: Simple Effect of Self-Selection:a

Effort = α0 + α1 SelfSelect + α2 HighAbility + α3 Period + ε

                          Expected    Small-Spread            Large-Spread
Independent Variable      Sign        Coefficient (t-stat)    Coefficient (t-stat)
Intercept                             14.40 (8.57)***         15.36 (10.54)***
SelfSelect                +           −2.38 (−1.51)           5.28 (2.68)***
HighAbility                           1.44 (0.96)             6.24 (3.77)***
Period                                0.32 (2.24)**           0.27 (1.71)*
No. Observations                      780                     680
R2                                    0.04                    0.21
Model F-statistic                     3.03**                  17.88***

***, **, * Denote significance at the 0.01, 0.05, and 0.10 levels, based on one-tailed statistics for directional predictions, and two-tailed otherwise.
a Standard errors are clustered on the competition-pair level.
This table presents regression results on the participants' effort decisions as a function of self-selection opportunity, tournament prize spread, and ability levels.

Variable Definitions:
HighAbility = 1 if the ability of the participant is high, and 0 otherwise;
SelfSelect = 1 if the participant is in the Self-Selection treatment, and 0 otherwise; and
LargeSpread = 1 if the tournament is large-spread, and 0 otherwise.


FIGURE 3
Test of the Mechanism Underlying the Motivation Effect

***, **, * Denote significance at the 0.01, 0.05, and 0.10 levels, two-tailed.
This figure presents the results of a mediation test of the mechanism underlying the motivation effect, namely, the self-selection opportunity generates
more homogeneous tournaments, which strengthens the motivation to deliver effort. This analysis uses path analysis for the full sample that includes both treatments (n = 1,460). All paths displayed in this figure are estimated jointly using the quasi-maximum likelihood method (QML). We cluster standard errors on the competition-pair level. The standardized path coefficients and corresponding significance levels
are shown next to each path. The paths with coefficients significant at the 0.10 level or less are depicted in solid lines, and other paths are in dotted lines.
We calculate goodness-of-fit of this model using the standardized root mean square of the residual (SRMR). The SRMR of the model is 0.00; values lower
than 0.10 are considered to be favorable (Bentler 1995; Weston and Gore 2006).

Variable Definitions:
EqualPairing = 1 if the participant is matched with a pair member with the same ability level, and 0 otherwise;
SelfSelect = 1 if the participant is in the Self-Selection treatment, and 0 otherwise; and
Effort = the level of effort chosen by participants in the experiment.

Self-Selection and the Giving-Up Effect and Complacency Effect


When high-ability employees and low-ability employees are mixed together, they may exert less effort once they learn that
they are competing with peers with higher ability (i.e., the giving-up effect) or peers with lower ability (i.e., the complacency
effect). However, making prize spread information available prior to selection can lead to greater homogeneity of tournament
contestants. We, therefore, explore whether the opportunity to select based on prize spread information can mitigate the complacency and giving-up effects.
First, we examine the giving-up effect and focus on low-ability participants because they are more susceptible to the giving-up effect (Libby and Lipe 1992; Casas-Arce and Martinez-Jerez 2009; Muller and Schotter 2010; Berger et al. 2013). We regress effort on EqualPairing and LateHalf (which indicates whether the period is among the last five of the ten periods) and an interaction between the two variables. We also control for prize spread (LargeSpread) and the self-selection opportunity (SelfSelect). Results in Panel A of Table 4 support our expectation. We find a significantly negative coefficient on LateHalf (t46 = −2.36, two-tailed p < 0.05), consistent with the giving-up effect. More importantly, the positive interaction term between EqualPairing and LateHalf suggests that the giving-up effect is mitigated when low-ability participants are matched with low-ability pair members (t46 = 2.32, two-tailed p < 0.05). Next, we turn to the complacency effect for high-ability participants, as they are more susceptible to the complacency effect (Casas-Arce and Martinez-Jerez 2009; Berger et al. 2013). Similar to the test above, we regress effort on EqualPairing and LateHalf and an interaction between the two variables, after controlling for SelfSelect and LargeSpread. Results in Panel B of Table 4 support our expectation. The negative coefficient on LateHalf indicates a complacency effect (t45 = −5.03, two-tailed p < 0.01). The positive coefficient on the interaction term between EqualPairing and LateHalf suggests that the complacency effect is mitigated when high-ability participants are matched with high-ability pair members (t45 = 2.28, two-tailed p < 0.05).
In sum, these results support the theoretical arguments suggesting that the giving-up tendency of low-ability participants
and the complacency tendency of high-ability participants are mitigated when pair members have similar ability.
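
A compact way to express these two tests is to estimate the same interaction model separately for the low-ability and high-ability subsamples, as in the sketch below; the column names and data layout are assumptions, and this is illustrative rather than our actual estimation code.

```python
# Sketch of the Table 4 giving-up / complacency tests: the LateHalf x
# EqualPairing interaction model, estimated separately by ability, with
# pair-clustered standard errors.
import statsmodels.formula.api as smf


def late_half_tests(df):
    fits = {}
    for label, subsample in (("low", df[df["HighAbility"] == 0]),
                             ("high", df[df["HighAbility"] == 1])):
        fits[label] = smf.ols(
            "Effort ~ LateHalf * EqualPairing + SelfSelect + LargeSpread",
            data=subsample,
        ).fit(cov_type="cluster", cov_kwds={"groups": subsample["pair_id"]})
    return fits
```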

Supplemental Experiments: Robustness of the Selection Effect


A potential concern with our main experiment is that the selection effect of the availability of prize spread information (H1) could be driven by our research design parameters. First, winners and losers, respectively, earned 550 and 450 in the small-spread tournament, compared with 800 and 200 in the large-spread tournament. Thus, from a low-ability participant's perspective, the small-spread tournament is financially more attractive than the large-spread tournament. These design parameters could potentially bias the results toward finding support for H1. Second, to create distinct levels of ability for the participants,
we set the ability parameter at 1 for low-ability participants and at 3 for high-ability participants in the main experiment. The
large difference between the high-ability and low-ability participants may make it easier to find support for H1.


TABLE 4
Self-Selection and Giving-Up Effect and Complacency Effect

Panel A: Regression Results for:

Effort = α0 + α1 LateHalf + α2 EqualPairing + α3 EqualPairing × LateHalf + α4 SelfSelect + α5 LargeSpread + ε
for Low-Ability Participantsa

Independent Variable         Expected Sign    Coefficient    t-statistic
Intercept                                     12.00          5.63***
LateHalf                     −                −4.13          −2.36**
EqualPairing                                  0.43           0.22
EqualPairing × LateHalf      +                4.37           2.32**
SelfSelect                                    1.20           0.72
LargeSpread                                   4.99           2.49**
No. Observations                              740
R2                                            0.08
Model F-statistic                             3.99***

Panel B: Regression Results for:

Effort = α0 + α1 LateHalf + α2 EqualPairing + α3 EqualPairing × LateHalf + α4 SelfSelect + α5 LargeSpread + ε
for High-Ability Participantsa

Independent Variable         Expected Sign    Coefficient    t-statistic
Intercept                                     9.48           6.49***
LateHalf                     −                −3.07          −5.03***
EqualPairing                                  8.09           5.34***
EqualPairing × LateHalf      +                2.82           2.28**
SelfSelect                                    0.63           0.43
LargeSpread                                   9.02           5.78***
No. Observations                              720
R2                                            0.54
Model F-statistic                             91.29***

***, **, * Denote significance at the 0.01, 0.05, and 0.10 levels, based on one-tailed statistics for directional predictions, and two-tailed otherwise.
a Standard errors are clustered on the competition-pair level.
This table presents regression results on the participants' effort decisions as a function of timing (the first five periods versus the last five periods), equal-ability pairing or not, self-selection opportunity, and tournament prize spread for low-ability participants and high-ability participants separately.

Variable Definitions:
EqualPairing = 1 if the participant is matched with a pair member with the same ability level, and 0 otherwise;
SelfSelect = 1 if the participant is in the Self-Selection treatment, and 0 otherwise;
LargeSpread = 1 if the tournament is large-spread, and 0 otherwise; and
LateHalf = 1 if the period is among the last five periods, and 0 otherwise.

Therefore, we conducted two supplemental experiments with more conservative parameter choices to replicate the H1
results in our main experiment. Specifically, in the first supplemental experiment (hereafter, Spread Experiment), winners and
losers earned 600 and 400 in the small-spread tournament, respectively, compared with 800 and 200 in the large-spread
tournament. In the second supplemental experiment (hereafter, Ability Experiment), we set the ability parameter at 1 for low-
ability participants and at 2 for high-ability participants. Apart from the respective change in each experiment, we used the same set of instructions and


experimental materials as the Self-Selection treatment in our main experiment.24 Given the focus on the robustness of the H1 results, we did not run the random assignment condition in the supplemental experiments. Table 5 displays the results.
Panel A of Table 5 shows that for both supplemental experiments, the percentage of high-ability participants is significantly higher in the large-spread tournament than in the small-spread tournament (77.80 percent versus 36.80 percent in the Spread Experiment, χ²(1, N = 56) = 8.19, two-tailed p < 0.01; 78.10 percent versus 25.00 percent in the Ability Experiment, χ²(1, N = 76) = 20.97, two-tailed p < 0.01). Panel B of Table 5 shows the results of a logit regression that formally tests H1. As in the main experiment, the coefficient on HighAbility is positive and significant in both supplemental experiments (Spread Experiment: z = 2.69 and 2.49, two-tailed p < 0.01; Ability Experiment: z = 4.27 and 3.49, two-tailed p < 0.01). These results are consistent with H1: high-ability participants, relative to low-ability participants, are more likely to select the large-spread tournament. Moreover, we replicate the path analysis results for the mechanisms underlying H1 using the two supplemental experiments. Untabulated results show that high-ability participants are more likely than low-ability participants to perceive a higher chance of winning the tournament (path coefficients: 0.55 [Spread Experiment] and 0.42 [Ability Experiment], z = 6.48 and 4.68, two-tailed p < 0.01), and the perception of a higher winning chance drives them to select the large-spread tournament (path coefficients: 0.52 [Spread Experiment] and 0.57 [Ability Experiment], z = 4.42 and 7.69, two-tailed p < 0.01).
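
The selection logits in Table 5, Panel B have a simple structure that the following sketch illustrates. The covariate names follow the table; the rest of the data layout is an assumption, and plain maximum-likelihood standard errors are used here even though the table reports robust ones.

```python
# Sketch of the Table 5, Panel B selection logit (one row per participant).
# Plain ML standard errors are shown for simplicity; the table itself uses
# robust standard errors.
import statsmodels.formula.api as smf


def selection_logit(df, with_covariates=True):
    rhs = "HighAbility"
    if with_covariates:
        rhs += " + Optimism + RiskAversion + CompetitionOrientation"
    return smf.logit("LargeSpread ~ " + rhs, data=df).fit(disp=0)
```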
To shed further light on the mediating role of participants' focus on the winner prize versus the loser prize, we use data from the two supplemental experiments to estimate a more complete sequential mediation model, with the perception of winning chance and the focus on the winner or the loser prize as two mediators that sequentially mediate the effect of ability on tournament selection.25 Results displayed in Figure 4 further support our theory. We find that HighAbility is positively associated with the perception of a higher winning chance (z = 5.16 [Spread Experiment] and 4.03 [Ability Experiment], two-tailed p < 0.01), which, in turn, is positively associated with the focus on the winner prize (z = 4.20 [Spread Experiment] and 3.71 [Ability Experiment], two-tailed p < 0.01). The two mediators are both positively associated with the choice of the large-spread tournament (perception of winning chance: z = 1.80 [Spread Experiment] and 3.99 [Ability Experiment], two-tailed p < 0.10 and p < 0.01; focus on winner prize: z = 3.43 [Spread Experiment] and 2.45 [Ability Experiment], two-tailed p < 0.01 and p = 0.01). Similarly, the perception of a higher winning chance reduces the focus on the loser prize, which sequentially mediates the effect of high ability on tournament selection (perception to focus on loser prize: z = 2.27 [Spread Experiment] and 3.55 [Ability Experiment], two-tailed p < 0.05 and p < 0.01; focus on loser prize to selection: z = 5.09 [Spread Experiment] and 3.29 [Ability Experiment], two-tailed p < 0.01).26,27
Collectively, the supplemental experiments show that the H1 results are robust to alternative prize spread parameters, as well as alternative ability parameters.

V. CONCLUSIONS
Many firms administer wage policies based on tournament competition. However, the disclosure of pay attached to
different ranks remains controversial (Lawler 2012). Our study shows that the disclosure of the prize spread information can
facilitate separation of employees based on ability, which enhances employees’ motivation to expend effort in tournaments.
Rather than maintaining a policy of pay secrecy, firms may derive benefits from disclosing the prize spread information. The

24 We recruited an additional 136 undergraduate student volunteers from the same university: 56 for the Spread Experiment and 80 for the Ability Experiment (four participants from two sessions were paid to leave because of an odd number of participants in one tournament, leaving us with 76 participants). Participants in the two supplemental experiments are similar to those in the main experiment (smallest p > 0.17), including age, gender, and work, except that the Spread Experiment includes more male participants (59 percent versus 38 percent, χ²(1, N = 202) = 6.95, two-tailed p < 0.01).
25 We measure the focus on the winner (loser) prize by asking participants, right after their selection choice, to rate, on a seven-point Likert scale, the extent to which they agree with the following statement: "I paid more attention to the high payment (the low payment) when I made my choice between Firm A and Firm B."
26 We also validated whether participants exercise a moderate level of strategic thinking in the two supplemental experiments. We asked participants to answer the following question: "Please rate the extent to which you considered how other participants would choose between Firm A and Firm B when you made your selection between the two firms" on a scale of 1 to 5 (1 = not at all, 2 = slightly, 3 = moderately, 4 = very, 5 = extremely). On average, participants indicate that they consider others' choices to a moderate extent (Spread Experiment: M = 3.02; Ability Experiment: M = 2.99).
27 We also use the two supplemental experiments to rule out an alternative explanation for H1, namely, that the selection occurs because the large-spread tournament is perceived as a riskier choice, and high-ability participants, relative to low-ability participants, are more willing to take risks (Mittal, Ross, and Tsiros 2002). Untabulated results do not support this alternative explanation. Participants' perceived riskiness of the large-spread tournament compared to the small-spread tournament does not differ between participants selecting the large-spread tournament and those selecting the small-spread tournament (Spread Experiment: M = 5.78 versus M = 5.76, t54 = 0.04, two-tailed p = 0.97; Ability Experiment: M = 6.25 versus M = 6.11, t74 = 0.62, two-tailed p = 0.54), and high-ability participants are not more likely than low-ability participants to perceive the large-spread tournament as being less risky (Spread Experiment: M = 6.14 versus M = 5.39, t54 = 2.15, two-tailed p < 0.05; Ability Experiment: M = 6.22 versus M = 6.13, t74 = 0.45, two-tailed p = 0.66).


TABLE 5
Hypothesis Testing of the Self-Selection Effect (H1) Using Two Supplemental Experiments

Panel A: Comparison of Means

                    Spread Experimenta                           Ability Experimentb
                    Small        Large        Total              Small        Large        Total
                    (n = 38)     (n = 18)     (n = 56)           (n = 44)     (n = 32)     (n = 76)
HighAbility%        0.368        0.778        0.500              0.250        0.781        0.500
                    (0.489)***   (0.428)***   (0.504)            (0.438)***   (0.420)***   (0.503)
EqualPairing%       0.474        0.556        0.500              0.500        0.563        0.526
                    (0.506)      (0.511)      (0.505)            (0.506)      (0.504)      (0.503)
Effort              14.461       21.522       16.730             12.170       22.925       16.699
                    (8.762)***   (10.475)***  (9.904)            (8.066)***   (8.646)***   (9.863)
Payoff              438.811      377.333      419.050            454.430      371.363      419.454
                    (100.932)    (300.086)    (192.738)          (57.987)**   (290.687)**  (197.846)

Panel B: Logit Regression Results for:c

LargeSpread = α0 + α1 HighAbility + α2 Optimism + α3 RiskAversion + α4 CompetitionOrientation + ε

                            Spread Experiment                    Ability Experiment
                            Coefficient     Coefficient          Coefficient     Coefficient
Independent Variable        (z-stat)        (z-stat)             (z-stat)        (z-stat)
Intercept                   1.79            6.27                 1.55            0.40
                            (3.29)***       (2.68)***            (3.70)***       (0.15)
HighAbility                 1.79            1.78                 2.37            2.11
                            (2.69)***       (2.49)***            (4.27)***       (3.49)***
Optimism                                    0.09                                 0.09
                                            (1.39)                               (1.41)
RiskAversion                                0.17                                 0.63
                                            (0.90)                               (3.04)***
CompetitionOrientation                      0.14                                 0.07
                                            (1.45)                               (0.66)
No. Observations            56              56                   76              76
Pseudo R2                   0.12            0.22                 0.21            0.33
Model Chi-square            7.26***         10.99**              18.25***        21.00***

***, **, * Denote significance at the 0.01, 0.05, and 0.10 levels, respectively (two-tailed).
Bold indicates that the comparison of means between the large-spread tournament and the small-spread tournament, Mean (large-spread tournament) − Mean (small-spread tournament), within each treatment is significant at conventional levels. Standard errors for the mean comparison for the variables Effort and Payoff are clustered on the competition-pair level.
a The Spread Experiment refers to the supplemental experiment in which we use the same instructions and experimental materials as the main experiment except for the prize spread of the small-spread tournament. Specifically, winners and losers, respectively, earned 600 and 400 in the small-spread tournament.
b The Ability Experiment refers to the supplemental experiment in which we use the same instructions and experimental materials as the main experiment except for the ability level of participants. Specifically, we set the ability parameter at 1 for low-ability participants and at 2 for high-ability participants.
c Robust standard errors are used.
This table displays the summary statistics and comparison of means in Panel A, and the test of the selection hypothesis (H1) in Panel B, for the two supplemental experiments.
See Table 1 for the definitions of HighAbility%, EqualPairing%, LargeSpread, and PerceivedWinning, and the three covariates in Panel B.

selection of employees is important for firms (Campbell 2012). When organizations want to attract talent or require different
talent for various jobs, the disclosure of prize spread information may be beneficial.
Our results further suggest that in environments in which tournament prize spread information across companies or business units is made available to potential employees, firms can widen the tournament prize spread to strengthen the motivation effect while holding the total bonus constant. We note, however, that prior research shows that increasing the prize spread in tournaments can have undesirable effects, such as an increase in sabotage (e.g., Harbring and Irlenbusch 2011).


FIGURE 4
Test of the Sequential Mediation Model in the Two Supplemental Experiments

Panel A: Results for the Spread Experiment

Panel B: Results for the Ability Experiment

***, **, * Denote significance at the 0.01, 0.05, and 0.10 levels, two-tailed.
This figure presents the results of a sequential mediation model for the two supplemental experiments. This analysis uses path analysis for the Self-Selection subsample. All paths displayed in this figure are estimated jointly using the quasi-maximum likelihood method (QML). The standardized path coefficients and corresponding significance levels are shown next to each path. The paths with coefficients significant at the 0.10 level or less are depicted in solid lines, and other paths are in dotted lines. We calculate goodness-of-fit of this model using the standardized root mean square of the residual (SRMR). The SRMR of the models in both Panel A and Panel B is 0.00; values lower than 0.10 are considered favorable (Bentler 1995; Weston and Gore 2006).

Variable Definitions:
HighAbility = an indicator variable that equals 1 if the ability of the participant is high, and 0 otherwise;
LargeSpread = an indicator variable that equals 1 if the prize spread of the tournament is large, and 0 otherwise;
PerceivedWinning = an indicator variable that equals 1 if the participants perceive that they have a higher winning chance than their pair members, and 0 otherwise;
FocusOnWinnerPrize = the response to a post-experimental questionnaire (PEQ) item asking participants to rate, on a seven-point Likert scale, their agreement with a statement that they paid more attention to the winner prize in making the tournament selection; and
FocusOnLoserPrize = the response to a post-experimental questionnaire (PEQ) item asking participants to rate, on a seven-point Likert scale, their agreement with a statement that they paid more attention to the loser prize in making the tournament selection.


Therefore, companies should carefully consider both the costs and benefits of increasing the prize spread in tournaments.
Specifically, there may be limited sabotage opportunities in firms with strong management control systems and more sabotage
opportunities in firms with weak management control systems. Our study suggests that firms should use tournaments with a large prize spread, and disclose such information, if sabotage opportunities are limited.
In addition, we note that there may be contextual factors that influence the likelihood of observing the sorting effect we
document in our study. For example, high-ability individuals who value work/life balance may not be attracted to firms with a large prize spread because they may interpret such a spread as indicative of a more demanding work environment. Also, some
companies (e.g., Whole Foods Market Inc.) may deliberately choose a smaller prize spread as part of a culture of pay equity
across employees at different levels of hierarchy. To the extent that this culture of pay equity is valued by some individuals,
these individuals may still choose to work for such companies, regardless of their ability level.
Finally, the results of our study are likely to be applicable to business settings where promotion criteria are primarily performance-based, or to companies that, by not being explicit about the promotion rule, create an anticipation among workers that performance is used for promotions (Chan 2018). In such settings, prize spread disclosure may lead to sorting, as we document. Yet when prospective employees have access to information about the promotion rule (e.g., promotion based on seniority versus merit), other forms of sorting may arise: low-ability employees may prefer organizations that grant promotions based on seniority, whereas high-ability employees may prefer organizations that grant promotions based on merit.
Our study suggests interesting directions for future research. First, we examine a setting where contestants interact
repeatedly in the same tournament after being assigned or selecting into the tournament. We rely on the assumption that after
selection, employees tend to stay in the same organization because of insufficient information, inertia, and the costs of switching jobs. Supplemental analyses confirm that participants' selection decisions are sticky, indicating that the potential motivation effect from selection is persistent. Future research can explore whether our results extend to environments in which
participants receive feedback about payoffs in other competing tournaments or in environments in which the costs of switching
jobs are relatively low.
Second, while we focus on selection as a way to generate a homogeneous workforce in terms of ability, companies can use
other ways to generate a homogeneous workforce. For example, handicapping, such as setting more challenging targets for
stronger contestants, may encourage both strong and weak contestants to expend more effort (e.g., Nalebuff and Stiglitz 1983).
Future research can examine whether granting managers discretion in setting targets or building identity among employees
(Kelly and Presslee 2017) enhances employee motivation to deliver effort in a tournament setting.
Finally, our results show that contestants adjust their effort based on competitors’ ability. The extent to which contestants
can learn their competitors' ability, and how fast they learn it, may depend on the level of detail and the frequency of firms' performance feedback systems. Under tournament schemes, if firms have a relatively homogeneous employee pool with respect
to ability, then providing detailed performance feedback and increasing feedback frequency may help employees learn peers’
ability and enhance employee motivation (Hannan et al. 2008; Tafkov 2013).

REFERENCES
Arnold, M. C., R. L. Hannan, and I. D. Tafkov. 2015. Non-Verifiable Communication in Homogeneous and Heterogeneous Teams.
Working paper, Universität Bern. Available at: https://www.researchgate.net/publication/276255761_Non-Verifiable_
Communication_in_Homogeneous_and_Heterogeneous_Teams
Audas, R., T. Barmby, and J. Treble. 2004. Luck, effort, and reward in an organizational hierarchy. Journal of Labor Economics 22 (2):
379–395. https://doi.org/10.1086/381254
Azmat, G., and M. Moller. 2018. The distribution of talent across contests. The Economic Journal 128 (609): 471–509. https://doi.org/10.
1111/ecoj.12426
Baker, G., M. Gibbs, and B. Holmstrom. 1994a. The wage policy of a firm. Quarterly Journal of Economics 109 (4): 921–955. https://
doi.org/10.2307/2118352
Baker, G., M. Gibbs, and B. Holmstrom. 1994b. The internal economics of a firm: Evidence from personnel data. Quarterly Journal of
Economics 109 (4): 881–919. https://doi.org/10.2307/2118351
Ball, S. B., M. H. Bazerman, and J. S. Carroll. 1991. An evaluation of learning in the bilateral winner’s curse. Organizational Behavior
and Human Decision Processes 48 (1): 1–22. https://doi.org/10.1016/0749-5978(91)90002-B
Bentler, P. M. 1995. EQS 6 Structural Equations Program Manual. Encino, CA: Multivariate Software.
Berger, L., K. J. Klassen, T. Libby, and A. Webb. 2013. Complacency and giving up across repeated tournaments: Evidence from the
field. Journal of Management Accounting Research 25 (1): 143–167. https://doi.org/10.2308/jmar-50435
Bosch-Domenech, A., J. G. Montalvo, R. Nagel, and A. Satorra. 2002. One, two, three, infinity, . . .: Newspaper and lab beauty-contest
experiments. American Economic Review 92 (5): 1687–1701. https://doi.org/10.1257/000282802762024737


Bowlin, K. O., J. Hales, and S. J. Kachelmeier. 2009. Experimental evidence of how prior experience as an auditor influences managers’
strategic reporting decisions. Review of Accounting Studies 14 (1): 63–87. https://doi.org/10.1007/s11142-008-9077-0
Brazel, J. F., J. Leiby, and T. H. Schaefer. 2017. When Do Rewards Encourage Professional Skepticism? Working paper, North Carolina
State University, The University of Georgia, and University of Missouri–Kansas City.
Brown, A. L., C. F. Camerer, and D. Lovallo. 2012. The review or not to review? Limited strategic thinking at the movie box office.
American Economic Journal: Microeconomics 4 (2): 1–26. https://doi.org/10.1257/mic.4.2.1
Brown, A. L., C. F. Camerer, and D. Lovallo. 2013. Estimating structural models of equilibrium and cognitive hierarchy thinking in the
field: The case of withheld movie critic reviews. Management Science 59 (3): 733–747. https://doi.org/10.1287/mnsc.1120.1563
Brown, C., J. H. Evans III, D. V. Moser, and A. Presslee. 2016. How Does Reducing Pay Dispersion Affect Employee Behavior? Working
paper, University of Pittsburgh. Available at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2713840
Brown, J. 2011. Quitters never win: The (adverse) incentive effects of competing with superstars. Journal of Political Economy 119 (5):
982–1013. https://doi.org/10.1086/663306
Bruggen, A., and M. Strobel. 2007. Real effort versus chosen effort in experiments. Economics Letters 96 (2): 232–236. https://doi.org/10.
1016/j.econlet.2007.01.008
Cabrales, A., G. Charness, and M. C. Villeval. 2011. Hidden information, bargaining power, and efficiency: An experiment. Experimental
Economics 14 (2): 133–159. https://doi.org/10.1007/s10683-010-9260-6
Camerer, C. 2003. Behavioral Game Theory: Experiments in Strategic Interaction. Princeton, NJ: Princeton University Press.
Camerer, C., T. H. Ho, and J. K. Chong. 2004. A cognitive hierarchy model of games. Quarterly Journal of Economics 119 (3): 861–898.
https://doi.org/10.1162/0033553041502225
Campbell, D. 2012. Employee selection as a control system. Journal of Accounting Research 50 (4): 931–966. https://doi.org/10.1111/j.
1475-679X.2012.00457.x
Card, D., A. Mas, E. Moretti, and E. Saez. 2012. Inequality at work: The effect of peer salaries on job satisfaction. American Economic
Review 102 (6): 2981–3003. https://doi.org/10.1257/aer.102.6.2981
Carr, M. D. 2011. Work hours and wage inequality: Evidence from the 2004 WERS. Journal of Socio-Economics 40 (4): 417–427.
https://doi.org/10.1016/j.socec.2010.10.007
Casas-Arce, P., and F. A. Martinez-Jerez. 2009. Relative performance compensation, contests, and dynamic incentives. Management
Science 55 (8): 1306–1320. https://doi.org/10.1287/mnsc.1090.1021
Cason, T. N., W. A. Masters, and R. M. Sheremeta. 2010. Entry into winner-take-all and proportional-prize contests: An experimental
study. Journal of Public Economics 94 (9-10): 604–611. https://doi.org/10.1016/j.jpubeco.2010.05.006
Chan, E. 2018. Promotion, relative performance information, and the Peter Principle. The Accounting Review 93 (3): 83–103. https://doi.
org/10.2308/accr-51890
Charness, G., and M. Dufwenberg. 2011. Participation. American Economic Review 101 (4): 1211–1237. https://doi.org/10.1257/aer.101.
4.1211
Charness, G., and P. Kuhn. 2007. Does pay inequality affect worker effort? Experimental evidence. Journal of Labor Economics 25 (4):
693–723. https://doi.org/10.1086/519540
Choi, J. W., A. Newman, and I. D. Tafkov. 2016. A marathon, a series of sprints, or both? Tournament horizon and dynamic task
complexity in multi-period settings. The Accounting Review 91 (5): 1391–1410. https://doi.org/10.2308/accr-51358
Chow, C. 1983. The effects of job standard difficulty and compensation schemes on performance: An exploration of linkages. The
Accounting Review 58: 667–685.
Cohan, P. 2012. Why stack ranking worked better at GE than Microsoft. Forbes (July 13). Available at: https://www.forbes.com/sites/
petercohan/2012/07/13/why-stack-ranking-worked-better-at-ge-than-microsoft/#20fa72763236
Crawford, V. P., M. A. Costa-Gomes, and N. Iriberri. 2013. Structural models of nonequilibrium strategic thinking: Theory, evidence, and
applications. Journal of Economic Literature 51 (1): 5–62. https://doi.org/10.1257/jel.51.1.5
Dahlback, O. 1990. Personality and risk-taking. Personality and Individual Differences 11 (12): 1235–1242. https://doi.org/10.1016/
0191-8869(90)90150-P
Damiano, E., L. Hao, and W. Suen. 2012. Competing for talents. Journal of Economic Theory 147 (6): 2190–2219. https://doi.org/10.
1016/j.jet.2012.09.002
Dohmen, T., and A. Falk. 2011. Performance pay and multidimensional sorting: Productivity, preferences, and gender. American
Economic Review 101 (2): 556–590. https://doi.org/10.1257/aer.101.2.556
Douthit, J. 2014. The Effect of Worker Skill as A Source of Productive Efficiency in An Incomplete Contract Environment. Working paper,
The University of Arizona.
Ehrenberg, R. G., and M. L. Bognanno. 1990. Do tournaments have incentive effects? Journal of Political Economy 98 (6): 1307–1324.
https://doi.org/10.1086/261736
Fershtman, C., and U. Gneezy. 2011. The trade-off between performance and quitting in high-power tournaments. Journal of the
European Economic Association 9 (2): 318–336. https://doi.org/10.1111/j.1542-4774.2010.01012.x
Fischbacher, U. 2007. z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics 10 (2): 171–178. https://
doi.org/10.1007/s10683-006-9159-4


Frederickson, J. R. 1992. Relative performance information: The effects of common uncertainty and contract type on agent effort. The
Accounting Review 67: 647–669.
Frederickson, J. R., and W. Waller. 2005. Carrot or stick? Contract framing and the use of decision-influencing information in a principal-
agent setting. Journal of Accounting Research 43 (5): 709–733. https://doi.org/10.1111/j.1475-679X.2005.00187.x
Freeman, R. B., and A. Gelber. 2010. Prize structure and information in tournaments: Experimental evidence. American Economic
Journal: Applied Economics 2 (1): 149–164. https://doi.org/10.1257/app.2.1.149
Gascoigne, J. 2013. Introducing Open Salaries at Buffer: Our Transparent Formula and All Individual Salaries. Available at: https://
open.bufferapp.com/introducing-open-salaries-at-buffer-including-our-transparent-formula-and-all-individual-salaries/
Goeree, J. K., and C. A. Holt. 2001. Ten little treasures of game theory and ten intuitive contradictions. American Economic Review 91
(5): 1402–1422. https://doi.org/10.1257/aer.91.5.1402
Goeree, J. K., C. A. Holt, and T. R. Palfrey. 2003. Risk averse behavior in generalized matching pennies games. Games and Economic
Behavior 45 (1): 97–113. https://doi.org/10.1016/S0899-8256(03)00052-6
Goldfarb, A., and M. Xiao. 2011. Who thinks about the competition? Managerial ability and strategic entry in U.S. local telephone
markets. American Economic Review 101 (7): 3130–3161. https://doi.org/10.1257/aer.101.7.3130
Hales, J. 2009. Are investors really willing to agree to disagree? An experimental investigation of how disagreement and attention to
disagreement affect trading behavior. Organizational Behavior and Human Decision Processes 108 (2): 230–241. https://doi.org/
10.1016/j.obhdp.2008.08.003
Hales, J., L. W. Wang, and M. G. Williamson. 2015. Selection benefits of stock-based compensation for the rank-and-file. The Accounting
Review 90 (4): 1497–1516. https://doi.org/10.2308/accr-50962
Hannan, R. L. 2005. The combined effect of wages and firm profit on employee effort. The Accounting Review 80 (1): 167–188. https://
doi.org/10.2308/accr.2005.80.1.167
Hannan, R. L., J. H. Kagel, and D. V. Moser. 2002. Partial gift exchange in an experimental labor market: Impact of subject population
differences, productivity differences, and effort requests on behavior. Journal of Labor Economics 20 (4): 923–951. https://doi.org/
10.1086/342894
Hannan, R. L., K. L. Towry, and Y. Zhang. 2013. Turning up the volume: An experimental investigation of the role of mutual monitoring
in tournaments. Contemporary Accounting Research 30 (4): 1401–1426. https://doi.org/10.1111/1911-3846.12006
Hannan, R. L., R. Krishnan, and A. H. Newman. 2008. The effects of disseminating relative performance feedback in tournament and
individual performance compensation plans. The Accounting Review 83 (4): 893–913. https://doi.org/10.2308/accr.2008.83.4.893
Harbring, C., and B. Irlenbusch. 2008. How many winners are good to have? On tournaments with sabotage. Journal of Economic
Behavior and Organization 65 (3-4): 682–702.
Harbring, C., and B. Irlenbusch. 2011. Sabotage in tournaments: Evidence from a laboratory experiment. Management Science 57 (4):
611–627. https://doi.org/10.1287/mnsc.1100.1296
Kachelmeier, S. J., and M. G. Williamson. 2010. Attracting creativity: The initial and aggregate effects of contract selection on creativity-
weighted productivity. The Accounting Review 85 (5): 1669–1691. https://doi.org/10.2308/accr.2010.85.5.1669
Kelly, K., and A. Presslee. 2017. Tournament group identity and performance: The moderating effect of winner proportion. Accounting,
Organizations and Society 56: 21–34. https://doi.org/10.1016/j.aos.2016.12.001
Kelly, K., A. Presslee, and A. Webb. 2017. The effects of tangible rewards versus cash rewards in consecutive sales tournaments: A field
experiment. The Accounting Review 92 (6): 165–185. https://doi.org/10.2308/accr-51709
Knauer, T., F. Sommer, and A. Wohrmann. 2017. Tournament winner proportion and its effect on effort: An investigation of the
underlying psychological mechanisms. European Accounting Review 26 (4): 681–702. https://doi.org/10.1080/09638180.2016.
1175957
Kuang, X. J., and D. V. Moser. 2009. Reciprocity and the effectiveness of optimal agency contracts. The Accounting Review 84 (5):
1671–1694. https://doi.org/10.2308/accr.2009.84.5.1671
Kwoh, J. 2012. "Rank and yank" retains vocal fans. Wall Street Journal (January 31). Available at: https://www.wsj.com/articles/
SB10001424052970203363504577186970064375222
Lawler, E. 2012. Pay secrecy: Why bother? Forbes (September 12). Available at: https://www.forbes.com/sites/edwardlawler/2012/09/12/
pay-secrecy-why-bother/#587ff89c6a60
Lazear, E. P., and S. Rosen. 1981. Rank-order tournaments as optimum labor contracts. Journal of Political Economy 89 (5): 841–864.
https://doi.org/10.1086/261010
Leuven, E., H. Oosterbeek, J. Sonnemans, and B. Van der Klaauw. 2011. Incentives versus sorting in tournaments: Evidence from a field
experiment. Journal of Labor Economics 29 (3): 637–658. https://doi.org/10.1086/659345
Libby, R., and M. G. Lipe. 1992. Incentives, effort, and the cognitive processes involved in accounting-related judgement. Journal of
Accounting Research 30 (2): 249–273. https://doi.org/10.2307/2491126
Lynch, J. 2005. The effort effects of prizes in the second half of tournaments. Journal of Economic Behavior and Organization 57 (1):
115–129. https://doi.org/10.1016/j.jebo.2003.10.005
Meidinger, C., J. L. Rulliere, and M. C. Villeval. 2003. Does team-based compensation give rise to problems when agents vary in their
ability? Experimental Economics 6 (3): 253–272. https://doi.org/10.1023/A:1026221318302


Mittal, V., W. T. Ross, Jr., and M. Tsiros. 2002. The role of issue valence and issue capability in determining effort investment. Journal of
Marketing Research 39 (4): 455–468. https://doi.org/10.1509/jmkr.39.4.455.19122
Morgan, J., D. Sisak, and F. Vardy. 2016. The Ponds Dilemma. Working paper, University of California, Berkeley, Erasmus University
Rotterdam, and International Monetary Fund.
Muller, W., and A. Schotter. 2010. Workaholics and dropouts in organizations. Journal of the European Economic Association 8 (4):
717–743. https://doi.org/10.1111/j.1542-4774.2010.tb00538.x
Myerson, R. B. 1999. Nash equilibrium and the history of economic theory. Journal of Economic Literature 37 (3): 1067–1082. https://
doi.org/10.1257/jel.37.3.1067
Nalebuff, B. J., and J. E. Stiglitz. 1983. Prizes and incentives: Toward a general theory of compensation and competition. Bell Journal of
Economics 14 (1): 21–43. https://doi.org/10.2307/3003535
Newman, A. H., and I. D. Tafkov. 2014. Relative performance information in tournaments with different prize structures. Accounting,
Organizations and Society 39 (5): 348–361. https://doi.org/10.1016/j.aos.2014.05.004
Ochs, J. 1995. Games with unique, mixed strategy equilibria: An experimental study. Games and Economic Behavior 10 (1): 202–217.
https://doi.org/10.1006/game.1995.1030
Petersen, M. A. 2009. Estimating standard errors in finance panel data sets: Comparing approaches. The Review of Financial Studies 22
(1): 435–480.
Prendergast, C. 1999. The provision of incentives in firms. Journal of Economic Literature 37 (1): 7–63. https://doi.org/10.1257/jel.37.1.7
Ryckman, R. M., M. Hammer, L. Kaczor, and A. J. Gold. 1990. Construction of a hypercompetitive attitude scale. Journal of Personality
Assessment 55 (3/4): 630–639. https://doi.org/10.1080/00223891.1990.9674097
Rynes, S., B. Gerhart, and K. A. Minette. 2004. The importance of pay in employee motivation: Discrepancies between what people say
and what they do. Human Resource Management 43 (4): 381–394. https://doi.org/10.1002/hrm.20031
Scheier, M. F., C. S. Carver, and M. W. Bridges. 1994. Distinguishing optimism from neuroticism (and trait anxiety, self-mastery, and
self-esteem): A reevaluation of the life orientation test. Journal of Personality and Social Psychology 67 (6): 1063–1078. https://
doi.org/10.1037/0022-3514.67.6.1063
Schotter, A., and K. Weigelt. 1992. Asymmetric tournaments, equal opportunity laws, and affirmative action: Some experimental results.
Quarterly Journal of Economics 107 (2): 511–539. https://doi.org/10.2307/2118480
Simonsohn, U. 2010. eBay’s crowded evenings: Competition neglect in market entry decisions. Management Science 56 (7): 1060–1073.
https://doi.org/10.1287/mnsc.1100.1180
Stahl, D. O. II, and P. W. Wilson. 1994. Experimental evidence on players’ models of other players. Journal of Economic Behavior and
Organization 25 (3): 309–327. https://doi.org/10.1016/0167-2681(94)90103-1
Tafkov, I. D. 2013. Private and public relative performance information under different compensation contracts. The Accounting Review
88 (1): 327–350. https://doi.org/10.2308/accr-50292
Towers Perrin. 2003. Working Today: Understanding What Drives Employee Engagement. The 2003 Towers Perrin Talent Report.
Available at: http://www.keepem.com/doc_files/Towers_Perrin_Talent_2003(TheFinal).pdf
Waller, W. S. 1985. Self-selection and the probability of quitting: A contracting approach to employee turnover in public accounting.
Journal of Accounting Research 23 (2): 817–828. https://doi.org/10.2307/2490839
Waller, W. S., and C. W. Chow. 1985. The self-selection and effort effects of standard-based employment contracts: A framework and
some empirical evidence. The Accounting Review 60: 458–476.
Weston, R., and P. A. Gore, Jr. 2006. A brief guide to structural equation modeling. Counseling Psychologist 34 (5): 719–751. https://doi.
org/10.1177/0011000006286345

APPENDIX A
Game Theoretical Proof

Step 1: The Choice of Tournament by Low-Ability Employees


We compare the payoff of low-ability employees between entering the large-spread tournament and the small-spread tournament. It is easy to see that in the large-spread tournament, low-ability employees earn more when competing with low-ability peers than when competing with high-ability peers. The payoff of low-ability employees when competing with low-ability peers in the large-spread tournament can be calculated as follows:

π_L = D · Prob(e_L + ε_L > ē_L + ε̄_L) + (D/2) · Prob(e_L + ε_L = ē_L + ε̄_L) − 3e_L²/14 + M     (1)

D is the prize spread (winner prize minus loser prize). π_i, ε_i, and e_i are the payoff, the random noise, and the effort level of either high-ability employees (i = H) or low-ability employees (i = L), respectively; bars denote the paired opponent. M is the loser prize, and 3e²/14 is the cost of effort (see Appendix B). The equilibrium effort level and payoff are derived from the first-order condition, as follows:


(41/882) · D − 3e_L/7 = 0, so e_L = (41/378) · D     (2)

When D = 600 (large spread), e_L = 30 (the maximum effort level; see Appendix B) and π_L = 307. So the payoff to low-ability employees in the large-spread tournament is no more than 307.
When low-ability employees choose the small-spread tournament, they are likely to compete with either high-ability employees or low-ability ones:
1. If they compete with high-ability employees, then a Nash equilibrium is that high-ability employees choose an effort of 6 and low-ability employees choose an effort of 1. We can verify the equilibrium by examining whether either party has an incentive to deviate. If the high-ability employees increase effort by 1 (Effort → 7), then this deviation increases the probability of winning by 7.5/441 and, thus, yields an additional gain of 2 (100 × 7.5/441), which is less than the additional cost of moving from effort 6 to effort 7. On the other hand, if the high-ability employees reduce effort by 1 (Effort → 5), then this deviation reduces the probability of winning by 16.5/441, which corresponds to a loss of 4 that exceeds the associated cost saving of 3. So the high-ability employees will neither increase nor decrease effort. Similarly, we can verify that the low-ability employees will neither increase nor decrease effort. Given these effort choices, the payoff to the low-ability employees is 452.
2. If the low-ability employees compete with other low-ability employees, then according to Formula (2), the effort at equilibrium is e_L = 11, and the corresponding payoff is π_L = 474.
So the payoff of low-ability employees in the large-spread tournament is no more than 307, while the payoff of low-ability
employees in the small-spread tournament is at least 452. Comparing these two payoffs, low-ability employees choose the
small-spread tournament.
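
The numbers in Step 1 can be verified with a short computation based on the stated cost function, the effort grid in Appendix B (maximum effort of 30), and the observation that in a symmetric equilibrium each contestant expects the average prize M + D/2. The sketch below reflects our reading of the appendix and is illustrative only.

```python
# Numeric check of the Step 1 equilibrium quantities.

def cost(e):
    # Cost of effort from Appendix B: 3 * e^2 / 14.
    return 3 * e**2 / 14


def symmetric_effort(D, cap=30):
    # First-order condition (2): e = (41/378) * D, capped at the maximum
    # feasible effort level of 30.
    return min(round(41 * D / 378), cap)


def symmetric_payoff(D, loser_prize):
    # In a symmetric equilibrium each contestant expects the average prize.
    e_star = symmetric_effort(D)
    return loser_prize + D / 2 - cost(e_star)


print(symmetric_effort(600), symmetric_payoff(600, 200))  # 30 and about 307
print(symmetric_effort(100), symmetric_payoff(100, 450))  # 11 and about 474
```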

Step 2: The Choice of Tournament by High-Ability Employees


Following the choice of low-ability employees, high-ability employees will compete with high-ability employees if they choose the large-spread tournament.28 The equilibrium effort and payoff are calculated similarly to the above:

π_H = D · Prob(3e_H + ε_H > 3ē_H + ε̄_H) + (D/2) · Prob(3e_H + ε_H = 3ē_H + ε̄_H) − 3e_H²/14 + M     (3)

When D = 600 (large spread), e_H = 30 and π_H = 307.
When high-ability employees choose the small-spread tournament, they again can earn at least 450 (the loser prize, when they choose the minimum effort at zero cost). Therefore, high-ability employees also choose the small-spread tournament.

APPENDIX B
Cost of Effort

Effort    Effort    Effort    Effort    Effort    Effort
Levels    Costs     Levels    Costs     Levels    Costs
1         0         11        26        21        95
2         1         12        31        22        104
3         2         13        36        23        113
4         3         14        42        24        123
5         5         15        48        25        134
6         8         16        55        26        145
7         11        17        62        27        156
8         14        18        69        28        168
9         17        19        77        29        180
10        21        20        86        30        193

The formula for the cost of effort is Cost = 3 × e²/14 (e is the level of effort).
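
The schedule above appears to be this formula rounded to the nearest integer, with halves rounded up; the following short check, which is our own observation rather than part of the original appendix, reproduces the tabulated values.

```python
# Reproduce the Appendix B cost schedule from Cost = 3 * e^2 / 14,
# rounding to the nearest integer with halves rounded up (our observation).

def effort_cost(e):
    return int(3 * e**2 / 14 + 0.5)


print([effort_cost(e) for e in (1, 7, 10, 20, 21, 30)])
# -> [0, 11, 21, 86, 95, 193], matching the table above
```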

28 In our experiment, if a participant is the only one selecting the large-spread tournament or the small-spread tournament, he or she will be paid $7 (= 210 points) to leave. Thus, we also assume that if only one participant selects the large-spread tournament, then this participant earns 210 points. This assumption guarantees that selecting the large-spread tournament generates a payoff of not more than 307, even if only one participant selects it.
