
In a Performance Measurement System: Is It

Job-Relevant Information or Its Decision-Making Use


That Leads to Better Performance?

By
Laurie Burney McWhorter*
Assistant Professor of Accounting
Mississippi State University
E-mail: LMcWhorter@cobilan.msstate.edu
And
C. Michele Matherly
Assistant Professor of Accounting
University of North Carolina at Charlotte
Email: Matherly@uncc.edu

Please do not quote without permission.

* Contact Author:
School of Accountancy
College of Business and Industry
Mississippi State University
P.O. Box EF
Mississippi State, Mississippi 39762
Office: (662) 325-1637
Fax: (662) 325-1646

Abstract
Prior budgeting studies building on Kren (1992) have consistently found that participation
improves individual performance, and that this relation can be partially explained by the
increased availability of job-relevant information (JRI). However, anecdotal evidence in the
strategic performance measurement literature suggests that having measures available (data)
does not necessarily mean that they are being used for decision-making (information). To
explore this distinction, we investigate whether JRI's relation to performance is explained by the
measures' use in decision-making. We collected our data through a survey of financial
managers, yielding a 50% response rate. Based on 698 usable responses, we successfully
replicate Kren's (1992) findings in a performance measurement system context. More
importantly, we find that information's decision-making use partially mediates the JRI-Performance relation. Overall, our results imply that organizations should make a concerted
effort to increase the use of relevant measures to fully benefit from having a strategic
performance measurement system.

Key Words: participative decision-making, job-relevant information, performance,
decision-making use, strategic performance measurement systems

Data Availability: Due to confidentiality issues, data beyond what is presented in the
paper cannot be made available.

i. Introduction
As information systems, performance measurement (PM) systems transform
performance measures (the inputs) into assessments of both organizational- and
individual-level performance (the outputs). PM systems exist along a continuum from
traditional to strategic performance measurement systems (SPMSs), such as the
Balanced Scorecard. While traditional PM systems focus primarily on financial
outcomes, SPMSs contain both financial and nonfinancial measures that are selected to
communicate and reflect organizational strategy (IMA, 1999; Ittner et al., 2003).1 Due
to positive reports of their use,2 the number of organizations implementing SPMSs has
grown rapidly in recent years with approximately 50 percent of Fortune 1000 companies
using them (Gumbus and Lyons 2002). Therefore, research investigating SPMSs is
needed to improve our understanding of the benefits and weaknesses of these systems,
and help increase the effectiveness of their use.
As PM systems evolve from traditional to SPMSs, accountants participate in the
system's design and maintenance. Based on our review of the literature, accountants'
participation in system design has only been examined in a budget setting. These
studies consistently find that participation is positively associated with performance
(Luft and Shields 2003), and that this relation is mediated by job-relevant information
(Kren, 1992; Chong and Chong 2002). However, neither the participation-performance
relation nor the mediation has been explored in a PM system context. Yet, one purpose
of PM systems is to influence behavioral and work outcomes. Thus, the opportunity
exists for performance improvements through participation in the PM system.

While the budgeting literature documents that increased availability of job-relevant information leads to better performance, the information overload literature
suggests that information is more than just having access to data. Data only becomes
information once it is understood and used (Meadow and Yuan 1997). In a PM system
context, an SPMS serves as a communication mechanism, which enhances an
individual's ability to understand organizational objectives and select actions needed for
success. Based on the communication aspect of an SPMS and the information overload
literature, we propose that the relation between job-relevant information and individual
job performance may be partially explained by the extent to which the information is
actually used for decision-making.
The primary objective of this study is to examine the mediating effects of
information use in a PM system environment. Specifically, this study develops a model
of the expected relations among participative decision-making, the availability of job-relevant information, individual performance, and the decision-making use of
performance measures (see Figure 1). We examine this model based on survey
responses from 698 financial managers (50% response rate) and test for mediation
using structural equation modeling. First, we provide evidence that Kren's (1992)
model, where job-relevant information mediates the participation-performance relation,
holds in a PM system context. These results demonstrate the robustness of Kren's
model outside of a budgeting environment. More importantly, we find that the link
between job-relevant information and performance can be partially explained by the
extent to which information is used in decision-making. These findings suggest that the

use of performance measures, not just their availability, enables individuals to achieve
higher levels of performance.
The remainder of the paper consists of six sections. The first section
differentiates between traditional PM systems and SPMSs. The second section
discusses the relevant literature and develops the hypotheses. The third section
describes the sample and research method. The remaining three sections contain the
results, discussion and conclusion.
ii. Performance Measurement Systems
In its simplest form, a system takes inputs, transforms them into outputs, and
then moves into a feedback loop. Accounting information systems typically consist of
inputs in the form of data that are converted into information useful for decision-making
related to planning, monitoring and controlling (Firmin and Linn 1968; O'Donnell and
David 2000). With a performance measurement (PM) system, the inputs are measures
that reflect the current state of the organization. Periodically, individuals convert these
performance measures into an assessment of organizational and/or individual
performance. The assessment process determines whether performance is consistent
with expectations. Like other systems, PM systems contain a single feedback loop that
occurs as individuals make decisions based on trends and changes in the measures.
Then, they initiate any necessary adjustments to the process, thereby forming the
loop back to the input stage of the process.
Typically, a PM system lies on a continuum between traditional systems and
strategic performance measurement systems (SPMSs), such as balanced scorecards or

key performance indicators. One differentiating characteristic between the two


extremes is the choice of inputs (performance measures). A traditional PM system
emphasizes financial measures of performance, while an SPMS combines both financial
and nonfinancial measures. SPMSs are developed through a filtering process, where
measures are chosen to reflect organizational strategy (Gates 1999). In this way, an
organization communicates information about "the long-term strategy, the relations
among the various strategic objectives, and the association between the employees'
actions and the chosen strategic goals" (Ittner and Larcker 1998, p. 223). As a result of
this communication, individuals are motivated to pursue actions necessary to achieve
organizational goals and objectives (Kaplan and Norton 1996).
Another feature that distinguishes traditional from strategic PM systems is the
feedback loop. While traditional systems include the single feedback loop already
described, SPMSs involve a double feedback loop process. During the second loop, the
chosen set of performance measures is evaluated against the organization's strategy to
verify that both the measures as well as the strategy continue to be valid and viable
(Kaplan and Norton 1996).
Recently, several studies have investigated SPMSs.3 From a behavioral
perspective, most research focuses on how SPMSs impact organizational-level outcomes
or division/individual performance evaluation processes.4 Exceptions include the studies
by McWhorter (2003) and McWhorter et al. (2003) that examine SPMSs at the individual
level in relation to the employee outcomes of perceptions of justice, role stress, job
satisfaction and job performance. The need for this research stems from the fact that

an organization's long-term success largely depends on the actions of individuals (Otley


1999; de Haas and Kleingeld 1999). This study maintains the focus on individuals.
iii. Literature And Hypotheses Development
In this section, we review the literature on the relation between participative
decision-making (PDM) and performance; in particular, we focus on the mediating role
of job-relevant information (JRI). We then apply this relationship to a PM system
context. Next, we explain why and how the model could be extended to include
decision-making use (DM Use) as a mediator of the JRI-Performance link. Figure 1
presents the relations examined in this study.

Insert Figure 1 About Here

1. Participative Decision-Making, Performance, Job-Relevant Information


Participation in a budget setting is one of the most researched management
accounting topics (Sheely and Brown 2004). This stream of research builds upon the
psychology and management literatures, which generally support the idea that better
behavioral outcomes result when employees participate in organizational functioning
(Lau and Tan 2003).
The predicted performance benefits of participation are grounded in expectancy
theory.5 According to this theory, individuals pursue rewards by expanding their
participation in organizational operations and thereby gain additional insight regarding job-related issues, such as goals and strategies (Magner et al., 1996). This involvement
also allows individuals to influence their work environment (Spector, 1986; Miller and

Monge 1986). Consistent with these expectations, prior research has shown that PDM
enables managers to enhance their performance and obtain work-related rewards (Glew
et al. 1995).
More recent budgeting studies have investigated the cognitive mechanisms
affecting the relationship between PDM and employee outcomes. These studies have
explored the influence of a variety of potential mediators, including role ambiguity
(Chenhall and Brownell 1988), job difficulty (Mia 1989), job standardization (Brownell
and Merchant 1990), and job-relevant information (JRI) (Kren 1992), which is the
mediator of interest in our model.
JRI is the quantity of task-oriented information that individuals have available for
completing job responsibilities (Lau and Tan 2003). Lau and Tan (2003, p. 23)
conclude that by having access to JRI, individuals can improve "the quality and
effectiveness of their task." JRI in a PM system can be classified into two categories:
decision-influencing and decision-facilitating. Decision-influencing information is "used
to evaluate an individual's performance" (Kren, 1992, p. 512). In a PM system, JRI is
acquired through the measures that are used to assess performance. Thus, JRI is
consistent with the decision-influencing definition. Alternatively, decision-facilitating
information "enables an individual to improve his or her action choice through better-informed effort" (Kren, 1992, p. 512). A PM system communicates information that
increases an individual's understanding of organizational objectives. This understanding
enables more informed decisions, which makes JRI in a PM system also decision-facilitating.6

Our study builds on Kren (1992), who found that JRI partially mediates the
positive relation between budgetary PDM and performance. According to Kren, PDM
increases the time that individuals spend thinking about their decisions and encourages
them to acquire JRI. Several researchers have investigated the PDM-JRI portion of
Kren's (1992) findings (Magner et al., 1996; Chong and Chong 2002; Sheely and Brown
2004; Lau and Tan 2003). These studies consistently find that JRI mediates the
relation between PDM and outcomes, such as employee performance and job
satisfaction. However, this relation has not been examined in accounting outside of a
budgeting setting. Therefore, the first objective of our study is to investigate whether
Kren's model holds in a PM system context. Consequently, we hypothesize the
following:

H1: The positive relation between Participative Decision-Making (PDM) and
Individual Performance (PERF) will be mediated by Job-Relevant
Information (JRI).
2. Decision-Making Use (DM Use)
The information overload literature provides additional insight into the JRI-Performance relation. This research reveals that increasing information up to a certain
point leads to better decision-making (see Edmunds and Morris 2000 for a review).
However, information loads beyond that point decrease decision-making performance
(Eppler and Mengis 2002). Furthermore, O'Reilly (1980, p. 686) recommends that an
organization provide sufficient amounts of relevant information to decision makers
while minimizing the amount of irrelevant information. However, research also
differentiates between information load and data load (Iselin 1993). Meadow and Yuan

(1997, p. 19) contend that to actually be information "the messages have to have been
received and understood or appraised" (emphasis added). This literature suggests that
the provision of JRI is not necessarily what leads to improved performance. Rather, the
information's use for decision-making (DM Use) may explain the positive relation
between JRI and performance.
As described earlier, PM systems vary in the quantity and diversity of
performance measures (i.e., data) captured. SPMSs place more reliance on
nonfinancial performance measures, which supplement the financial measures found in
traditional PM systems. With SPMSs, organizations avoid having a "fad of the month"
collection of measures because the PM system evolves from a deliberate process of
selecting measures linked to organizational strategy. Through this linkage, an SPMS
communicates a common strategy, enabling individuals to better understand not only
the measures, but how their actions impact outcomes. Therefore, individuals are more
likely to use the performance measures, transforming them from data to information.
In contrast, anecdotal evidence suggests that many SPMSs contain measures
that are poorly defined and/or suggest inconsistent action choices (Schneiderman,
1999; Venkatraman and Gering 2000). For instance, a survey of IMA members
reported that SPMS users had more confidence in their PM system's financial
performance measures than the nonfinancial ones (Frigo and Krumwiede 1999). Even
within the nonfinancial categories, respondents viewed the less common measures,
such as employee outcomes and information systems capabilities, as much less
effective. Innovation and growth measures received the lowest overall ratings, with the

majority of respondents classifying them as "Less than Adequate-Poor." This evidence


suggests that the availability of relevant information does not necessarily mean that it
will be used for decision-making. Instead, the availability of relevant but unreliable
information may reduce the extent to which JRI is used (DM Use).7
The information overload literature suggests that DM Use may mediate the JRI-Performance relation. However, within PM systems, the relation between JRI and DM
Use is unclear because of the conflict that arises from communicating strategy through
potentially unreliable measures. Therefore, to test whether the expected increase in
DM Use outweighs the problem of measure reliability, we hypothesize the following:

H2: The positive relation between Job-Relevant Information (JRI) and Individual
Performance (PERF) will be mediated by Decision-Making Use (DM Use).
iv. Sample And Research Method

1. Sample Selection
A survey of Institute of Management Accountants (IMA) members was used to
collect data for this study. Prior to the survey's mailing, the instrument was pilot tested
by several colleagues and business/accounting managers, and modifications were made
based on their feedback. The IMA sample consists of a randomly generated list of
1,524 individuals who report on their annual dues statement a job title of manager or
above. The initial mailing also included a reply card, which was returned separately,
asking respondents to mark whether the survey was completed. Using these reply
cards, a second survey was mailed to 828 individuals after approximately four weeks.8
We received 763 responses from financial managers, resulting in a 50 percent
response rate.9 However, we eliminated 65 responses because the individuals failed to


answer several questions needed to test the hypotheses. Therefore, our final sample
consists of 698 usable responses.10 On average, the respondents in our sample have
been employed by their current organization for 10 years and have been a manager for
13 years. To test for nonresponse bias, we compared the gender and industry of early
and late respondents using Chi-square tests. We also performed independent samples

t-tests for differences between the means of the independent variables. The results of
these two tests indicate that a systematic bias due to nonresponse is unlikely.
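The two nonresponse-bias checks described above (Chi-square tests on the gender and industry of early versus late respondents, and t-tests on the means of the independent variables) can be sketched as follows. All counts and scale values here are synthetic stand-ins, since the actual responses are confidential, and with samples of this size the t statistic is compared against a normal approximation rather than an exact t distribution.

```python
# Sketch of the nonresponse-bias checks: a Chi-square test of
# independence on a 2x2 (early/late x gender) table and a two-sample
# t-test on a scale mean. All counts and values below are hypothetical.
import math
import random
from statistics import NormalDist, mean, stdev

def chi2_2x2(table):
    """Chi-square test of independence for a 2x2 table (df = 1)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = [a + b, c + d], [a + c, b + d]
    stat = sum((obs - rows[i // 2] * cols[i % 2] / n) ** 2
               / (rows[i // 2] * cols[i % 2] / n)
               for i, obs in enumerate([a, b, c, d]))
    return stat, math.erfc(math.sqrt(stat / 2))  # chi-square sf for df = 1

def ttest_large(x, y):
    """Welch-style t-test with a normal approximation (fine for large n)."""
    se = math.sqrt(stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y))
    z = (mean(x) - mean(y)) / se
    return z, 2 * NormalDist().cdf(-abs(z))

# early vs late respondents: (male, female) counts -- made-up numbers
stat, p_gender = chi2_2x2([(210, 140), (205, 143)])

random.seed(1)
early = [random.gauss(4.5, 1.2) for _ in range(350)]  # e.g., a scale mean
late = [random.gauss(4.5, 1.2) for _ in range(348)]
z, p_scale = ttest_large(early, late)
# large p-values are consistent with no systematic nonresponse bias
```

Non-significant results on both tests, as reported above, are what would support the conclusion that nonresponse bias is unlikely.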

2. Measurement of Constructs
This section describes how the survey measured the constructs represented in
the model. For participative decision-making (PDM) and job-relevant information (JRI),
we used established scales. We developed scales to measure individual performance
(PERF) and decision-making use (DM Use). For all constructs, higher scores represent
higher levels for that scale (see the Appendix for individual scale items).

3. Participative Decision-Making (PDM)


The participative decision-making (PDM) scale reflects the respondents'
involvement in their PM system development. PDM is usually evaluated on a continuum
from no involvement to high consultation. A three-item budgetary participation
index, developed by Kren (1992), was modified to assess participation in the PM
systems development.


4. Job-Relevant Information (JRI)


Similar to other accounting studies (Magner et al., 1996; Chong and Chong
2002), we used Kren's three-item JRI scale. This scale asks respondents to indicate the
extent to which the information necessary to adequately perform their job is available.

5. Individual Performance (PERF)


This scale asks respondents to provide their most recent performance evaluation
for the following managerial functions: strategic planning, operational planning,
directing and decision-making.11 This scale also includes a fifth item designed to
capture overall effectiveness.
We developed this scale for three reasons. First, a measure of managerial
performance must incorporate "significant dimensions of managerial activity or
behavior" (Heneman, 1974, p. 639). As recommended by Heneman (1974), this scale
is based on broad managerial functions so that the items are meaningful, but also
general enough to apply in a cross-sectional survey. Second, self-report measures
provide respondents with anonymity, which is important for soliciting honest
responses.12 Finally, this scale asks respondents to anchor on an independent
evaluation rather than providing a self-evaluation. This anchor reflects how superiors
evaluated the respondents' performance. Therefore, these evaluations assess whether
the respondents actions are consistent with those desired by the organization.

6. Decision-Making Use (DM Use)


To measure DM Use, respondents reported the extent to which their decision-making incorporated performance measures across the following eight categories:


financial, customer, product/service quality, operational performance, innovation in


processes, employee outcomes, information systems capabilities, and organizational
procedures.
Two factors influenced the development of this scale. First, broad categories of
performance measures were necessary to accommodate a cross-sectional survey.
Second, we selected these specific categories to reflect the breadth of measures found
in PM systems across the spectrum from traditional systems to SPMSs.
v. Results
Prior to data analyses, we screened the responses for significant missing values.
In addition to the respondents who did not answer a large portion of the survey, we
noted a high incidence of missing responses in two items of the PERF scale (percentage
of respondents: 33% for strategic planning and 20% for operational planning). On that
basis, we removed these two items from further analyses, leaving three items in the
performance scale. For the other scales, all planned items were retained for scale
evaluation.
Next, we evaluated the four scales in the theoretical model using two criteria:
unidimensionality and reliability. Unidimensionality was first assessed using
confirmatory factor analysis (CFA) to verify that a single factor solution exists. One
item planned for the DM Use scale (the financial category) failed to load on any factor
at an initial cutoff of 0.30 and was eliminated from further analyses. Two items
(customer and product/service quality) loaded in a separate scale, resulting in two DM
Use scales (customer-related categories and a grouping of less common nonfinancial


measures). Nunnally (1978) recommends that factor scores of approximately 0.50 and
greater are valid for inclusion in the final factor solution and are satisfactory for
interpretation of the factor. As shown by the factor scores in the Appendix, all five
scales were unidimensional.
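As a rough illustration of this unidimensionality screen, the sketch below checks item loadings on the first principal component of the item correlation matrix against the approximate 0.50 guideline. This is a simplified stand-in for the confirmatory factor analysis actually used, and the three-item "scale" is simulated rather than drawn from the survey.

```python
# Simplified unidimensionality screen: loadings on the first principal
# component of the item correlation matrix, checked against the ~0.50
# guideline. A stand-in for the CFA used in the paper; data simulated.
import numpy as np

def first_factor_loadings(items):
    """items: (n_respondents, n_items); |loadings| on the first component."""
    corr = np.corrcoef(items, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)      # eigenvalues ascending
    return np.abs(eigvecs[:, -1] * np.sqrt(eigvals[-1]))

rng = np.random.default_rng(0)
factor = rng.normal(size=500)                    # one common factor
items = np.column_stack(
    [factor + 0.5 * rng.normal(size=500) for _ in range(3)])

loadings = first_factor_loadings(items)
# all three simulated items load well above the 0.50 cutoff
```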
The second criterion of reliability, or internal consistency, is sufficiently
demonstrated with a Cronbach's alpha of 0.70 and above (Nunnally 1978). As shown
along the diagonal in Table 1, all scales resulted in a Cronbach's alpha exceeding this
threshold.
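The reliability criterion can be illustrated with the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the scale total). The scale data below are simulated, not the survey responses.

```python
# Cronbach's alpha behind the 0.70 reliability threshold:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(scale total))
# The three-item scale below is simulated, not the survey data.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(7)
factor = rng.normal(size=400)
scale = np.column_stack(
    [factor + 0.6 * rng.normal(size=400) for _ in range(3)])
alpha = cronbach_alpha(scale)   # comfortably above the 0.70 threshold here
```

As a sanity check, perfectly correlated items (identical columns) drive alpha to exactly 1.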

Insert Table 1 About Here

Table 1 also reports the inter-correlation coefficients. The Cronbach's alpha for
each construct is greater than the correlation coefficients in the same column. Because
a higher correlation exists within the construct than between measures, discriminant
validity is indicated. Even so, significant correlations were found for all of the relations
examined in the model. Since these significant correlations may indicate the presence
of multicollinearity, the parameter estimates were examined to ensure that the items
loaded on the expected variables (Viator 2001). For all of the scale items, these
estimates were significant. Also, the standard errors, which range from 0.038 to 0.078,
appear reasonable. Finally, the variance inflation factors were computed and none
exceed the guideline of 10 recommended by Dielman (1996). Therefore, no evidence
of significant multicollinearity exists.
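The multicollinearity check can be sketched with the usual variance inflation factor, VIF_j = 1/(1 - R^2_j), where R^2_j comes from regressing predictor j on the remaining predictors. The predictor matrix below is simulated; uncorrelated columns give VIFs near one, well under the guideline of 10.

```python
# Variance inflation factors against the guideline of 10 (Dielman 1996):
#   VIF_j = 1 / (1 - R^2_j), R^2_j from regressing predictor j on the rest.
# The predictor matrix is simulated; independent columns give VIFs near 1.
import numpy as np

def vifs(X):
    """X: (n, p) predictor matrix; returns one VIF per column."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(3)
X = rng.normal(size=(698, 4))   # four uncorrelated pseudo-predictors
v = vifs(X)                     # all well under the cutoff of 10
```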


Descriptive statistics related to the five scales appear in Table 2. We derived the
model's variables by averaging the items in each scale. All of the individual scale
means fall between three and six.

Insert Table 2 About Here

1. Structural Equation Modeling


The theoretical model was tested with AMOS structural equation modeling (SEM)
software following the guidelines specified by Anderson and Gerbing (1988) and Kline
(1998). First, a confirmatory factor analysis (CFA) was completed using SEM software
to evaluate whether the observed variables correspond to the expected latent factors
included in the measurement model (Byrne 2001). Second, the CFA results served as
the basis for the structural model, which specified the expected regression relations
among the latent factors (Byrne 2001).13
As part of the CFA analysis, we also examined potential misspecification by
computing the modification indices, which depict the expected decrease in the model's
overall Chi-square for each parameter if it is freely estimated (Byrne 2001).14 A review
of these indices reveals a high level of cross-loadings between items in the two DM Use
scales. To eliminate this misspecification, we removed the DM Use-Customer construct
from the model.15

2. Analysis of the Measurement Model


We assessed the model's fit using three measures: the Chi-square divided by the
model degrees of freedom (CMIN/DF), the comparative fit index (CFI) and the root


mean square error of approximation (RMSEA). In addition, the sample size was
evaluated using Hoelter's critical N.
Jöreskog (1969) recommends the CMIN/DF ratio as being more appropriate than
the Chi-square test for large sample sizes. To derive the CMIN/DF, the model's Chi-square is divided by its degrees of freedom (Medsker et al. 1994). A ratio of
approximately five or less is considered reasonable (Wheaton et al. 1977).
Bentler (1990) developed the CFI. This index considers the sample size when
comparing the measurement model to an independence model that assumes no
correlation between the observed variables (Byrne 2001). CFI values range between 0
and 1, with values approaching 1 reflecting a very good fit (Arbuckle and Wothke
1999). The general cutoff for a well-fit model is 0.95 (Byrne 2001).
The RMSEA also compares the measurement model to an independence model,
but relies on the population discrepancy function, which fits the model to the population
moments instead of the sample moments (Arbuckle and Wothke 1999). This measure
compensates for any effect resulting from the models complexity. A good fit is
represented by a value closer to zero, with a guideline of 0.08 or less indicating a
reasonable fit (Browne and Cudeck 1993).
The adequacy of the sample size is evaluated by Hoelter's critical N (Byrne
2001), which estimates the sample size needed to generate an acceptable model.
AMOS provides a Hoelter's critical N value at two levels of significance, 0.01 and 0.05.
The measurement model has a CMIN/DF of 1.496 (standard of < 5), CFI of 0.992
(standard of > 0.95) and RMSEA of 0.027 (standard of < 0.08). For our study, the


critical N values were 669 (0.01 level) and 603 (0.05 level), both less than our 698
usable responses. Since these measures suggest a good fit and a sufficient sample
size, this model was used to evaluate the hypotheses.
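The reported fit statistics are also internally consistent: the RMSEA point estimate is tied to CMIN/DF and the sample size N by the standard formula RMSEA = sqrt(max((chi2/df - 1)/(N - 1), 0)). Plugging in the CMIN/DF values reported in this paper with N = 698 reproduces each reported RMSEA.

```python
# Cross-check of the reported fit statistics: the RMSEA point estimate
# is tied to CMIN/DF and sample size N by
#   RMSEA = sqrt(max((CMIN/DF - 1) / (N - 1), 0))
import math

def rmsea_from_cmindf(cmindf, n):
    return math.sqrt(max((cmindf - 1.0) / (n - 1), 0.0))

N = 698
print(round(rmsea_from_cmindf(1.496, N), 3))  # 0.027, measurement model
print(round(rmsea_from_cmindf(1.179, N), 3))  # 0.016, H1 model (Table 3)
print(round(rmsea_from_cmindf(4.968, N), 3))  # 0.075, H2 model (Table 4)
```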

3. Analyses of Mediation Hypotheses


With mediation, three latent variables are involved: (1) the predictor variable (A),
(2) the hypothesized mediator (B) and (3) the latent dependent variable (C). For each
hypothesized mediator, we followed the guidelines described by Holmbeck (1997) for
examining mediation relations using SEM. These guidelines require a series of analyses
that begins by testing the direct link between A and C to assess the model's fit. If good
fit is achieved, the overall relation (A → B → C), as well as the coefficients for the A →
B and B → C links, are estimated. Further analysis assumes that all of these predicted
links are significant in the expected directions. As a final step in the mediation analysis,
two models are estimated: Model A, which constrains the A → C path to a value of zero,
and Model B, which leaves the A → C path unconstrained. AMOS SEM software
computes a comparison of the two models and provides a Chi-square difference test.
According to Blanthorne et al. (2004), partial mediation is indicated if the model
improves when including the A → C path. However, if no improvement is noted in the
model, then complete mediation is supported.
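Because Model A and Model B differ by a single degree of freedom, the p-value of the Chi-square difference test has a simple closed form, erfc(sqrt(diff/2)). The sketch below applies it to the two differences reported in the hypothesis tests that follow (0.023 for H1 and 15.763 for H2).

```python
# Chi-square difference test for the mediation step: the constrained and
# unconstrained models differ by one df, so the p-value has the closed
# form erfc(sqrt(diff / 2)). Inputs are the differences reported for the
# two hypotheses.
import math

def chi2_diff_p(diff):
    """p-value for a Chi-square difference with a change in df of 1."""
    return math.erfc(math.sqrt(diff / 2.0))

p_h1 = chi2_diff_p(0.023)    # ~0.88: no improvement, full mediation
p_h2 = chi2_diff_p(15.763)   # < 0.001: improvement, partial mediation
```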

4. Hypothesis 1
In Hypothesis 1, we predict that JRI (B) mediates the relation between PDM (A)
and PERF (C), as found in previous research. Table 3 presents the results of these
analyses. As seen in Panel A, the PDM-PERF link is significant (p-value = 0.009). The


sample size is greater than Hoelter's critical N at both the 0.01 level (N = 476) and
the 0.05 level (N = 430). This model also meets all of the benchmarks for assessing
goodness of fit (CMIN/DF = 1.179, CFI = 0.999, RMSEA = 0.016). In addition, as
shown in Panel A, the separate analyses of the A → B → C model and each individual
path produce significant coefficients.
The final step in the mediation analyses requires the calculation of a Chi-square
comparing models with and without the A → C path (see Table 3, Panel B). This
comparison reports a difference in Chi-square of 0.023, with a change in df of 1. This
difference is not statistically significant (p-value = 0.880). Thus, the comparison
indicates that including the direct link between A (PDM) and C (PERF) does not improve
the model. Therefore, support is given for Hypothesis 1 that JRI (B) fully mediates the
PDM (A) → PERF (C) relation.

Insert Table 3 About Here

5. Hypothesis 2
Hypothesis 2 investigates whether DM Use (B) mediates the relation between JRI
(A) and PERF (C). The results of these analyses are shown in Table 4. The goodness
of fit measures for this A → C model are within the recommended guidelines (CMIN/DF
= 4.968, CFI = 0.949, RMSEA = 0.075) and the sample size is sufficient (Hoelter's
critical N of 221 at the 0.01 level and 194 at the 0.05 level). All coefficients in the
separate analyses of the A → B → C model and each individual path are significant (see
Panel A). Finally, we review the Chi-square test to compare the model with and without


the A → C path. As shown in Panel B, the difference in Chi-square is 15.763, with a
change in df of 1. Since obtaining this result by chance is unlikely (p-value < 0.001), these
findings indicate that decision-making use partially mediates the relation between JRI
and PERF. Thus, Hypothesis 2 is supported.

Insert Table 4 About Here

6. Analysis of the Structural Model


Figure 2 presents the results of the structural model, which includes all of the
predicted relations. Even when combined in this full model, our overall findings from
the separate mediation analyses persist. As shown in Figure 2, all of the links are
significant at p-values < 0.001, except for the PDM-PERF link (p-value = 0.676) and
the DM Use-PERF link (p-value = 0.060).

Insert Figure 2 About Here

7. PDM-PERF Link
As noted in the analyses of Hypothesis 1, the PDM-PERF link is significant when
tested in isolation (see Panel A of Table 3, p-value = 0.009). However, the relation is
fully mediated by JRI. This result suggests that the PDM → JRI → PERF model (Model A)
provides a richer explanation of the relations than a model that considers a direct link
between PDM and PERF (Model B). Furthermore, the full model shown in Figure 2 also
includes the influence of DM Use, which partially mediates the JRI-PERF relation.
Having both of these variables in the full model detracts from the significance of the


direct link between PDM-PERF. Hence, it is not surprising that the PDM-PERF link is
insignificant in the full model presented in Figure 2.

8. DM Use-PERF Link
The results for Hypothesis 2 indicate that DM Use acts as a partial mediator of
the JRI-PERF relation. The model limited to DM Use and PERF is significant at a p-value
< 0.001 (see Panel A of Table 4). However, this link only becomes marginally
significant when the direct link between JRI and PERF is added (see Panel B of Table
4). Overall, our findings show that both DM Use and JRI are associated with improved
performance. Consequently, the marginally significant result for the DM Use-PERF link
in Figure 2s full model is consistent with these findings.
vi. Discussion
Our study makes two important contributions. First, we demonstrate that Kren's
(1992) JRI mediation model, which previously had been tested only in budgeting, also
works in a PM system environment. Second, having established that it holds in this
environment, we expand Kren's model by including DM Use as a mediator of the JRI-Performance link.
For the first hypothesis, we successfully replicated Krens model within a PM
system context. Therefore, our study provides additional evidence regarding the
generalizability of his results. Similar to the budget environment, participation in PM
system development appears to enhance job performance through the provision of jobrelevant information. These findings imply that organizations can improve employee,
and ultimately organizational, performance by encouraging participation.


More important, though, is the additional finding that decision-making use
mediates the JRI-Performance path. We find that the decision-making use of less
common, nonfinancial measures partially mediates this relation. This evidence suggests
that organizations can obtain performance gains when relying on these newer, less
common performance measures, which are more often associated with SPMSs.
Finally, our results support the information overload contention that just
providing data will not necessarily lead to performance improvements. Instead, all
performance measures, especially the less common ones, need to have a clearly
communicated purpose and be perceived as both relevant and reliable, so that
managers will convert the data into information used for decision-making. One
mechanism that could be used to accomplish these goals is adequate training. Without
training, managers may perceive the PM system measures as less useful and discount
them when making decisions. As a result, an organization may fail to gain the full
benefits of the system.
While our research contributes to understanding the model's relations, several
avenues for future research exist. First, this study adds a new variable to Kren's (1992)
model, which would be useful to evaluate in other contexts. Second, additional
research needs to identify methods for converting relevant data into information so that
managers can factor it into their decision-making. Another worthwhile area to explore
is whether training encourages the use of relevant information.
This study, as with any survey, is affected by various limitations that must be
considered when interpreting the results. First, we collected the data through a
cross-sectional, convenience sample of financial managers, which may not be generalizable to
a different group of respondents. Second, we focus only on a subset of factors that
may impact performance. Including other factors may provide more insight regarding
the model's relations. Finally, similar to self-ratings, the individual performance items
may be biased by leniency error and restriction of range (see Furnham and Stringfield
1994 for a review). However, to mitigate these potential biases, we attempt to anchor
responses to supervisor evaluations.
vii. Conclusion
Much attention has been given to the positive effect of participation and job-relevant
information on performance in a budget setting. By applying these relations to
a PM system environment, our study provides insight into the generalizability of this
model. Moreover, we find that this model can be enhanced. Specifically, the extent to
which data are used in decision-making partially explains how relevant information leads
to improved performance. These findings have implications at both the individual and
organizational levels. To obtain performance improvements, individuals should make a
concerted effort to identify and use relevant facts. Furthermore, organizations should
encourage participation in system design, since they also benefit from the performance
gains of their employees.


APPENDIX
Scale Factor Loadings

Survey                                                                   Factor
Item #   Item                                                            Score

Participative Decision-Making (PDM)
(1 = strongly disagree and 7 = strongly agree)
My organization's performance measurement system . . .
D14      is not final until I am satisfied with it.                      0.677
D17      is developed with consideration of my opinion as an
         important factor.                                               0.857
D22      is developed with my involvement being an important factor.     0.816

Job-Relevant Information (JRI)
(1 = strongly disagree and 7 = strongly agree)
F2       I am always clear about what is necessary to perform well
         on my job.                                                      0.725
F4       I have adequate information to make optimal decisions to
         accomplish my performance objectives.                           0.859
F5       I am able to obtain the strategic information necessary to
         evaluate important decision alternatives.                       0.826

Individual Performance (PERF)
(1 = worst possible rating and 7 = best possible rating)
Listed below are categories of managerial functions. For each, please
indicate how your performance was rated on your most recent performance
evaluation.
J1       Strategic planning (a)                                          ---
J2       Operational planning (a)                                        ---
J3       Directing                                                       0.785
J4       Decision-making                                                 0.797
J5       Overall rating                                                  0.838

Decision-Making Use (DM Use)
(1 = used very little and 7 = used a great deal)
Listed below are categories of subunit performance measures. For each,
please indicate the extent to which you use information in that category
to make decisions about your organization's business unit.

                                                     Customer    Other
                                                     Factor      Factor
G1       Financial outcomes (b)                      ---         ---
G2       Customer outcomes (b)                       0.809       ---
G3       Product/service quality (b)                 0.809       ---
G4       Operational performance                     ---         0.618
G5       Innovation in processes                     ---         0.696
G6       Employee outcomes                           ---         0.641
G7       Information systems capabilities            ---         0.695
G8       Organizational procedures                   ---         0.742

(a) Items eliminated due to a large percentage of missing values.
(b) Items eliminated from scale due to low factor loadings and for model
specification.
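The loadings above are estimates from the confirmatory measurement model. Purely as an illustration of what a single-factor loading represents, the sketch below approximates loadings as the first principal component of an item correlation matrix (numpy assumed; the correlation values are hypothetical, not the survey's):

```python
import numpy as np

def single_factor_loadings(R):
    """Approximate single-factor loadings from an item correlation
    matrix R as its first principal component, scaled so each loading
    is the item's correlation with the component."""
    eigvals, eigvecs = np.linalg.eigh(np.asarray(R, dtype=float))
    v, lam = eigvecs[:, -1], eigvals[-1]        # dominant eigenpair
    loadings = v * np.sqrt(lam)                 # scale to correlation metric
    return loadings if loadings.sum() >= 0 else -loadings  # fix sign

# Hypothetical three-item scale whose items inter-correlate at 0.64:
R = np.array([[1.00, 0.64, 0.64],
              [0.64, 1.00, 0.64],
              [0.64, 0.64, 1.00]])
print(np.round(single_factor_loadings(R), 3))   # [0.872 0.872 0.872]
```

Principal-component loadings slightly overstate common-factor loadings, so this is a rough approximation rather than the confirmatory estimation method used in the paper.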


REFERENCES
Anderson, J. C., and D. W. Gerbing, 1988, Structural equation modeling in practice: A
review and recommended two-step approach, Psychological Bulletin 103 (May),
411-423.
Arbuckle, J. L., and W. Wothke, 1999, AMOS 4.0 User's Guide, Chicago, IL, SmallWaters
Corporation.
Bentler, P. M., 1990, Comparative fit indexes in structural models, Psychological Bulletin
107 (March), 238-246.
Blanthorne, C., L. A. Jones-Farmer, and E. D. Almer, 2004, Dear reviewer number one:
Maybe I should have used SEM, but now it's too late, Presented at the American
Accounting Association's Accounting, Behavior and Organizations Section
Research Conference and Case Symposium, Chicago, IL, October.
Browne, M. W., and R. Cudeck, 1993, Alternative ways of assessing model fit, In
Testing Structural Equation Models, edited by K. A. Bollen and J. S. Long, 136-162,
Newbury Park, CA, Sage Publications.
Brownell, P., 1995, Research Methods in Management Accounting, Coopers & Lybrand
Accounting Research Methodology Monograph No. 2, Melbourne, Australia,
Coopers & Lybrand.
Brownell, P., and K. A. Merchant, 1990, The budgetary and performance influences of
product standardization and manufacturing process automation, Journal of
Accounting Research 28 (Autumn), 388-397.
Bryant, L., D. A. Jones, and S. K. Widener, 2004, Managing value creation within the
firm: An examination of multiple performance measures, Journal of Management
Accounting Research 16, 107-131.
Byrne, B. M., 2001, Structural Equation Modeling with AMOS: Basic Concepts,
Applications, and Programming, Mahwah, NJ, Lawrence Erlbaum Associates.
Chenhall, R. H., and P. Brownell, 1988, The effect of participative budgeting on job
satisfaction and performance: Role ambiguity as an intervening variable,
Accounting, Organizations and Society 13 (3), 225-233.
Chong, V. K., and K. M. Chong, 2002, Budget goal commitment and informational
effects of budget participation on performance: A structural equation modeling
approach, Behavioral Research in Accounting 14, 65-86.


Cohen, J., and P. Cohen, 1983, Applied Multiple Regression/Correlation Analysis for the
Behavioral Sciences, Hillsdale, NJ, Lawrence Erlbaum Associates, Publishers.
Davis, S., and T. Albright, 2004, An investigation of the effect of Balanced Scorecard
implementation on financial performance, Management Accounting Research 15
(June), 135-153.
de Haas, M., and A. Kleingeld, 1999, Multilevel design of performance measurement
systems: Enhancing strategic dialogue throughout the organization, Management
Accounting Research 10 (September), 233-261.
Dielman, T. E., 1996, Applied Regression Analysis for Business and Economics, Second
edition, Belmont, CA, Duxbury Press.
Dillman, D. A., 2000, Mail and Internet Surveys: The Tailored Design Method, New
York, NY, Wiley.
Edmunds, A., and A. Morris, 2000, The problem of information overload in business
organisations: A review of the literature, International Journal of Information
Management 20 (February), 17-28.
Eppler, M. J., and J. Mengis, 2002, The concept of information overload: A review of
literature from organization science, marketing, accounting, MIS, and related
disciplines, Retrieved January 1, 2005 from the NetAcademy on Knowledge
Media, Available at,
http://www.knowledgemedia.org/modules/pub/view.php/knowledgemedia-25
Firmin, P. A., and J. J. Linn, 1968, Information systems and managerial accounting, The
Accounting Review (January), 75-82.
Frigo, M. L., and K. R. Krumwiede, 1999, Balanced scorecards: A rising trend in
strategic performance measurement, Journal of Strategic Performance
Measurement 3 (February/March), 42-48.
Furnham, A., and P. Stringfield, 1994, Congruence of self and subordinate ratings of
managerial practices as a correlate of supervisor evaluation, Journal of
Occupational and Organizational Psychology (March), 57-67.
Gates, S., 1999, Aligning Strategic Performance Measures and Results, Number
R-1261-99-RR, New York, NY, The Conference Board.
Glew, D. J., A. M. O'Leary-Kelly, R. W. Griffin, and D. D. Van Fleet, 1995, Participation
in organizations: A preview of the issues and proposed framework for future
analysis, Journal of Management 21 (Fall), 395-421.


Gumbus, A., and B. Lyons, 2002, The balanced scorecard at Philips Electronics,
Strategic Finance (November), 45-49.
Heneman, H. B., 1974, Comparisons of self- and superior ratings of managerial
performance, Journal of Applied Psychology 59 (October), 638-642.
Holmbeck, G. N., 1997, Toward terminological, conceptual, and statistical clarity in the
study of mediators and moderators: Examples from the child-clinical and
pediatric psychology literatures, Journal of Consulting and Clinical Psychology 65
(August), 599-610.
Institute of Management Accountants (IMA), 1999, Counting More, Counting Less.
Transformations in the Management Accounting Profession, Montvale, NJ, IMA
Publications.
Iselin, E. R., 1993, The effects of the information and data properties of financial ratios
and statements on managerial decision quality, Journal of Business Finance &
Accounting 20 (January), 249-266.
Ittner, C. D., and D. F. Larcker, 1998, Innovations in performance measurement:
Trends and research implications, Journal of Management Accounting Research
10, 205-238.
______., ______., and T. Randall, 2003, Performance implications of strategic
performance measurement in financial services firms, Accounting, Organizations
and Society 28 (October-November), 715-741.
Joreskog, K. G., 1969, A general approach to confirmatory maximum likelihood factor
analysis, Psychometrika 34, 183-202.
Kaplan, R. S., and D. P. Norton, 1996, The Balanced Scorecard: Translating Strategy
into Action, Boston, MA, Harvard Business School Press.
Kline, R. B., 1998, Principles and Practice of Structural Equation Modeling, New York,
NY, Guilford Publications.
Kren, L., 1992, Budgetary participation and managerial performance: The impact of
information and environmental volatility, The Accounting Review 67 (July), 511-526.
Lau, C. M., and S. L. C. Tan, 2003, The effects of participation and job-relevant
information on the relationship between evaluative style and job satisfaction,
Review of Quantitative Finance and Accounting 21 (July), 17-34.


Libby, T., S. Salterio, and A. Webb, 2004, The balanced scorecard: The effects of
assurance and process accountability on managerial judgment, The Accounting
Review (October), 1075-1094.
Lipe, M. G., and S. E. Salterio, 2000, The balanced scorecard: Judgmental effects of
common and unique performance measures, The Accounting Review (July),
283-298.
Luft, J. and M. D. Shields, 2003, Mapping management accounting: Graphics and
guidelines for theory-consistent empirical research, Accounting, Organizations
and Society 28 (2-3), 169-249.
Magner, N., R. B. Welker, and T. L. Campbell, 1996, Testing a model of cognitive
budgetary participation processes in a latent variable structural equations
framework, Accounting and Business Research 27 (Winter), 41-50.
McWhorter, L. B., 2003, The association between the perceived strategic linkage of
performance measures and managerial role stress, Presented at the American
Accounting Association's Management Accounting Section Research Conference
and Case Symposium, San Diego, CA, January.
McWhorter, L. B., C. Henle, and Z. Byrne, 2003, An investigation of organizational
justice and job performance outcomes associated with strategic performance
measurement system use, Presented at the American Accounting Association's
Accounting, Behavior and Organizations Section Research Conference and Case
Symposium, Denver, CO, October.
Meadow, C. T., and W. Yuan, 1997, Measuring the impact of information: Defining the
concepts, Information Processing & Management 33 (November), 697-714.
Medsker, G. J., L. J. Williams, and P. J. Holahan, 1994, A review of current practices for
evaluating causal models in organizational behavior and human resources
management research, Journal of Management 20 (Summer), 439-464.
Mia, L., 1989, The impact of participation in budgeting and job difficulty on managerial
performance and work motivation: A research note, Accounting, Organizations
and Society 14 (4), 347-358.
Miller, K. I., and P. R. Monge, 1986, Participation, satisfaction, and productivity: A
meta-analytic review, Academy of Management Journal 29 (December), 727-753.
Nunnally, J. C., 1978, Psychometric Theory, New York, NY, McGraw-Hill Book Company.


O'Donnell, E., and J. S. David, 2000, How information systems influence user decisions:
A research framework and literature review, International Journal of Accounting
Information Systems 1 (December), 178-203.
O'Reilly, C. A., III, 1980, Individuals and information overload in organizations: Is more
necessarily better?, Academy of Management Journal 23 (December), 684-696.
Otley, D., 1999, Performance management: A framework for management control
systems research, Management Accounting Research 10 (December), 363-382.
Robbins, S. P., 2003, Organizational Behavior, Tenth edition, Upper Saddle River, NJ,
Prentice Hall.
Schneiderman, A. M., 1999, Why balanced scorecards fail, Journal of Strategic
Performance Measurement (January), 6-11.
Sheely, R. A., and J. F. Brown, Jr., 2004, A re-examination of the effect of job-relevant
information on the budgetary participation-job performance relation during an
age of employee empowerment, Presented at the Annual Meeting of the
American Academy of Accounting and Finance, New Orleans, LA, December.
Spector, P. E., 1986, Perceived control by employees: A meta-analysis of studies
concerning autonomy and participation at work, Human Relations 39
(November), 1005-1016.
Venkatraman, G., and M. Gering, 2000, The balanced scorecard, Ivey Business Journal
64 (January-February), 10-13.
Viator, R. E., 2001, The association of formal and informal public accounting mentoring
with role stress and related job outcomes, Accounting, Organizations and Society
26 (1), 73-93.
Webb, R. A., 2004, Managers' commitment to the goals contained in a strategic
performance measurement system, Contemporary Accounting Research 21
(Winter), 925-958.
Wheaton, B., B. Muthen, D. F. Alwin, and G. F. Summers, 1977, Assessing reliability and
stability in panel models, Sociological Methodology 8, 84-136.
Young, S. M., 1996, Survey research in management accounting: A critical assessment,
In Research Methods in Accounting: Issues and Debates, edited by A.
Richardson, 55-68, Vancouver, BC, CGA-Canada Research Foundation.


TABLE 1
Multi-trait Matrix of Latent Constructs

Latent Construct      PDM      JRI      PERF     DM Use -   DM Use -
                                                 Customer   Other
PDM                   0.846
JRI                   0.440    0.865
PERF                  0.087    0.206    0.870
DM Use - Customer     0.220    0.292    0.087    0.856
DM Use - Other        0.291    0.392    0.154    0.396      0.820

Where:
PDM    = Participative Decision-Making
JRI    = Job-Relevant Information
PERF   = Individual Performance
DM Use = Decision-Making Use

Note: The diagonal represents the standardized Cronbach's alpha, while the off-diagonal
values are the correlation coefficients between the latent constructs.
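The standardized alphas on the diagonal follow a simple closed form from the inter-item correlations. A sketch, assuming numpy; the item correlation matrix shown is hypothetical, since item-level correlations are not reported:

```python
import numpy as np

def standardized_alpha(R):
    """Standardized Cronbach's alpha from an item correlation matrix R:
    alpha = k * r_bar / (1 + (k - 1) * r_bar), where r_bar is the mean
    off-diagonal (inter-item) correlation and k the number of items."""
    R = np.asarray(R, dtype=float)
    k = R.shape[0]
    r_bar = R[~np.eye(k, dtype=bool)].mean()    # mean off-diagonal correlation
    return k * r_bar / (1 + (k - 1) * r_bar)

# Hypothetical three-item scale with inter-item correlations around 0.65:
R = np.array([[1.00, 0.65, 0.60],
              [0.65, 1.00, 0.70],
              [0.60, 0.70, 1.00]])
print(round(standardized_alpha(R), 3))          # 0.848
```

The formula makes clear that alpha rises with both the number of items and their average inter-correlation, which is why short scales need strongly correlated items to reach values in the mid-0.8 range reported here.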


TABLE 2
Descriptive Statistics

                       No. of    Mean        Actual Range
Latent Construct       Items     (Std Dev)   (Theoretical)
PDM                    3         3.828       1.00 to 7.00
                                 (1.546)     (1.00 to 7.00)
JRI                    3         4.768       1.00 to 7.00
                                 (1.270)     (1.00 to 7.00)
PERF                   3         5.787       2.33 to 7.00
                                 (0.734)     (1.00 to 7.00)
DM Use - Customer      2         5.070       1.00 to 7.00
                                 (1.387)     (1.00 to 7.00)
DM Use - Other         5         4.646       1.00 to 7.00
                                 (1.068)     (1.00 to 7.00)

Where:
PDM    = Participative Decision-Making
JRI    = Job-Relevant Information
PERF   = Individual Performance
DM Use = Decision-Making Use

TABLE 3
Tests for JRI as a Mediator (Hypothesis 1)

Panel A: Initial tests for mediation

Model evaluated                             Coefficient   p-value
1. PDM (A) → PERF (C)                          0.051       0.009
2. PDM (A) → JRI (B) → PERF (C) (a)
     PDM (A) → JRI (B) path                    0.416       0.000
     JRI (B) → PERF (C) path                   0.124       0.000
3. PDM (A) → JRI (B)                           0.416       0.000
4. JRI (B) → PERF (C)                          0.125       0.000

(a) This model does not include the PDM (A) → PERF (C) path.

Where:
PDM  = Participative Decision-Making
JRI  = Job-Relevant Information
PERF = Individual Performance

Panel B: SEM results comparing models with and without the PDM-PERF path

Model A (b):  PDM → JRI: 0.416 (0.000);  JRI → PERF: 0.124 (0.000)
Model B:      PDM → JRI: 0.416 (0.000);  JRI → PERF: 0.127 (0.000);
              PDM → PERF: -0.003 (0.880)

(b) Model A constrains the PDM (A) → PERF (C) path to zero.

Significance test of model comparison: change in χ² = 0.023, change in df = 1,
p-value = 0.880.

TABLE 4
Tests for DM Use as a Mediator (Hypothesis 2)

Panel A: Initial tests for mediation

Model evaluated                               Coefficient   p-value
1. JRI (A) → PERF (C)                            0.125       0.000
2. JRI (A) → DM Use (B) → PERF (C) (a)
     JRI (A) → DM Use (B) path                   0.374       0.000
     DM Use (B) → PERF (C) path                  0.123       0.000
3. JRI (A) → DM Use (B)                          0.369       0.000
4. DM Use (B) → PERF (C)                         0.112       0.000

(a) This model does not include the JRI (A) → PERF (C) path.

Where:
JRI    = Job-Relevant Information
PERF   = Individual Performance
DM Use = Decision-Making Use

Panel B: SEM results comparing models with and without the JRI-PERF path

Model A (b):  JRI → DM Use: 0.374 (0.000);  DM Use → PERF: 0.123 (0.000)
Model B:      JRI → DM Use: 0.369 (0.000);  DM Use → PERF: 0.060 (0.057);
              JRI → PERF: 0.103 (0.000)

(b) Model A constrains the JRI (A) → PERF (C) path to zero.

Significance test of model comparison: change in χ² = 15.763, change in df = 1,
p-value = 0.000.


FIGURE 1
Theoretical Model

[Path diagram: PDM → JRI → PERF, with the direct PDM → PERF path also shown;
each of these paths is labeled H1+. JRI → DM Use → PERF; each of these paths is
labeled H2+.]

Where:
PDM    = Participative Decision-Making
JRI    = Job-Relevant Information
PERF   = Individual Performance
DM Use = Decision-Making Use

FIGURE 2
SEM Full Model Results

[Path diagram, coefficient (p-value): PDM → PERF: -0.009 (0.676);
PDM → JRI: 0.424 (0.000); JRI → PERF: 0.108 (0.000);
JRI → DM Use: 0.375 (0.000); DM Use → PERF: 0.060 (0.060).]

Where:
PDM    = Participative Decision-Making
JRI    = Job-Relevant Information
PERF   = Individual Performance
DM Use = Decision-Making Use

ENDNOTES

1 Typical financial measures are return on investment (ROI) and earnings per share
(EPS), while examples of nonfinancial measures include market share, percentage of
defective products, and employee satisfaction.

2 The Balanced Scorecard Collaborative recognizes organizations that have achieved
significant performance gains using SPMSs. For a list of these organizations, see
www.bscol.com/bscol/hof/members.

3 For example, see Libby et al. (2004), Webb (2004), Bryant et al. (2004), and Davis
and Albright (2004).

4 For instance, see Lipe and Salterio (2000), Libby et al. (2004), and Webb (2004).

5 Expectancy theory is a widely accepted theory for explaining motivation. According to
this theory, individuals will choose the decision option that has the greatest
motivational force. This motivation is a function of three relationships: expectancy,
instrumentality, and valence. Expectancy involves an individual's perception as to
whether his/her effort will lead to desired performance. Instrumentality refers to the
individual's belief that meeting performance expectations will result in increased
rewards. Finally, valence relates to the appeal of the expected rewards to the
individual. See Robbins (2003) for a more thorough explanation of the theory.

6 Kren (1992) classifies JRI in a budget setting as decision-facilitating under the
contention that access to JRI improves subordinates' knowledge of possible action
choices because they are better informed.

7 Statement of Financial Accounting Concepts No. 2 also identifies relevance and
reliability as the two primary characteristics of useful information.

8 The reply card information was used to reduce the second survey mailing to only
those individuals who had not previously returned the reply card. Reply cards were
received from 912 individuals.

9 To increase response rates, we used several procedures recommended by Brownell
(1995), Young (1996), and Dillman (2000). First, we personalized each cover letter
and reply card, then hand-signed the cover letter in blue ink. Second, we attached a
short hand-written, personalized note to each survey requesting the individual's
participation in the study. Third, we hand-addressed each mailing label and used
stamps in lieu of a postage meter. Fourth, one week after the first mailing, we sent a
postcard reminding individuals to complete the survey. Finally, about four weeks
later, we sent a second survey mailing to individuals who had not returned the reply
card.

10 Cohen and Cohen (1983) recommend a sample size of approximately 200 to achieve
power of 0.80 at a significance level of 0.05.

11 For these four performance evaluation items, respondents could also select an eighth
category of "not evaluated." These responses were treated as missing values in the
analyses.

12 However, confidentiality cannot be assured if the information is obtained from the
respondent's superior (Chenhall and Brownell 1988; Kren 1992).

13 According to Byrne (2001), the measurement model identifies the relations between
the observed variables (the items used for measurement) and the underlying
constructs that they are intended to measure. In contrast, the structural model
defines the manner in which measured constructs are expected to influence other
variables in the model.

14 Based on these indices, the error variances for two DM Use-Other items (Information
systems capabilities and Organizational procedures) were allowed to covary.

15 After review, the failure of the financial, customer, and quality categories seems
reasonable because these measures are often found in all types of PM systems. The
resulting DM Use scale consists of the remaining, less common nonfinancial
performance measures, which would be the categories that could be expected to
differ in use along the traditional-SPMS continuum.
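The sample-size guideline cited in endnote 10 can be checked with a standard power approximation. A sketch, assuming Python with scipy and using a population correlation of 0.20 (roughly the JRI-PERF correlation in Table 1) as the hypothetical target effect:

```python
from math import atanh, sqrt
from scipy.stats import norm

def corr_power(r, n, alpha=0.05):
    """Approximate two-sided power to detect a population correlation r
    with sample size n, via the Fisher z transformation."""
    z = atanh(r) * sqrt(n - 3)        # noncentrality of the z statistic
    crit = norm.ppf(1 - alpha / 2)    # two-sided critical value
    return norm.sf(crit - z) + norm.cdf(-crit - z)

print(round(corr_power(0.20, 200), 2))   # 0.81
```

With n = 200 the power to detect r = 0.20 is about 0.81, consistent with Cohen and Cohen's (1983) recommendation; the study's 698 usable responses comfortably exceed this.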

