An Integrative Model of
Competency Development,
Training Design, Assessment
Center, and Multi-Rater
Assessment
Hsin-Chih Chen
Sharon S. Naquin
The problem and the solution. Although the assessment center has been proven effective in predicting performance, the issue of establishing the construct-related validity of the assessment center remains unsolved, resulting in an unmet research challenge. Woehr and Arthur asserted that the lack of construct-related validity in the assessment center literature is primarily due to issues of design and development. This article focuses on the design aspect of the assessment center to develop an integrative competency-based assessment center model that links competency development, training design, assessment center, and multi-rater assessment together. Built around the validity (particularly construct-related) issues of the assessment center, the model guides scholarly practitioners on how to design a competency-based assessment center that has the potential to improve construct-related validity and can be built into training design and assessment and other human resource functions. Nine propositions related to validity were developed in accordance with the model to evoke future research. Practical implications are also provided.
Keywords: assessment center; competency modeling; performance
assessment
A review of related literature indicates that researchers have not reached a clear
definition of competency. The term sometimes refers to outputs of competent
performers and sometimes refers to underlying characteristics that enable an
individual to achieve outstanding performance (Dubois & Rothwell, 2004;
Hoffmann, 1999; McLagan, 1997). Most definitions, however, relate to exemplary performers or performance in a specific job or at a specific job level (Boyatzis, 1982), whereas a related term, core competency, is tied to strategic, future-oriented,
collective functions at the organizational level (Moingeon & Edmondson, 1996; Prahalad & Hamel, 1990). Thus, we have adopted an overarching perspective that combines both the performance and strategic aspects associated with the various definitions found in the literature. We consider competency to refer to the underlying individual work-related characteristics (e.g., skills, knowledge, attitudes, beliefs, motives, and traits) that enable successful job performance, where successful is understood to be in keeping with the organization's strategic functions (e.g., vision, mission, uniqueness, future orientation, success, or survival).
A similar construct, competency development or competency modeling,
refers to the process of identifying a set of competencies representative of
job proficiency. With the generic term just defined, competency development
can enhance various human resource (HR) and organizational development
activities including personnel selection, job promotion, training and develop-
ment, training needs analyses, performance appraisal, individual career planning,
HR planning, placement, strategic planning, succession planning, compen-
sation, and recruitment (Byham & Moyer, 2004; Howard, 1997; Lucia &
Lepsinger, 1999).
It is understandable that assessment strategies and methodologies are
closely related to competency, and a common assessment strategy is the use of
assessment center. Assessment center is not a brick-and-mortar research cen-
ter or building. It is, rather, an abstract concept that exists in practice and refers
to standardized procedures used for assessing behavior-based or performance-
based dimensions whereby participants are assessed using multiple exercises
and/or simulations (Thornton, 1992). Common simulation exercises used in an
assessment center setting include oral presentations, leaderless group discus-
sions, role-playing, in-basket exercises, oral fact-finding, business games, and
integrated simulations (Thornton & Mueller-Hanson, 2004). Dimensions for
assessment (equated to competencies) are usually identified through job analy-
sis. However, it should be noted that although the terms job analysis and com-
petency modeling are often used interchangeably, the two differ in terms of
assessment of reliability, strategic focus, and expected outcome (Shippmann
et al., 2000).
Research Problems
Research on assessment center has evolved over the past few decades as
researchers have moved from focusing on an understanding of what an assess-
ment center is and how it works to establishing some criterion-related validity
and generalizability (Howard, 1997). However, the issue of establishing construct-
related validity of assessment center is still unsolved, resulting in an unmet
research challenge (Robie, Osburn, Morris, Etchegaray, & Adams, 2000).
Construct-related validity refers to the degree to which a theoretical concept is
operationalized and the degree to which the operational inference exhibits
consistency with what a researcher intends to measure. In the current assessment center literature, it fundamentally refers to discrepancies between competencies and
the measures that are used to demonstrate such competencies in assessment cen-
ter activities. Woehr and Arthur (2003) asserted that the lack of construct-related
validity in assessment center literature is primarily due to issues of design and
development. This challenge also clearly relates to an ongoing debate as to
whether the design of assessment center should be based on dimensions and
competencies or tasks and exercises (Byham, 2004; Howard, 1997; Joyce,
Thayer, & Pond, 1994; Lowry, 1995).
On the other hand, as mentioned, although the assessment center has been applied to a wide range of HR-related functions, the applications appear piecemeal and not systematically connected. More important, its utilization in human resource development (HRD) practice is relatively sparse (Chen, 2006 [this issue]). An obvious and immediately useful application of the assessment center to HRD is assessing the effectiveness of competency-based training. Because HRD is deeply rooted in the design and development of learning
activities across various levels, integrating the HRD perspective into assess-
ment center has strong potential to contribute to assessment center literature in
resolving the construct-related validity issues of assessment center. Meanwhile,
the application of the assessment center to HRD can also help the HRD field,
particularly the design of training assessment, move further away from cognitive or reactive assessments toward behavioral assessment, a more reliable measure.
Another issue in the assessment center literature is that although changes in individuals' explicit behavior can be readily observed through assessment center activities, the ability to assess implicit behavior (e.g., motivation, emotion, beliefs, values, and visions) through an assessment center is limited.
In contrast, multi-rater assessment such as dual-ratings assessment can poten-
tially be more effective in assessing implicit behavioral competencies, but these
methods are not able to provide the level of information regarding tangible
outcomes that assessment center can. This is primarily due to the fact that
assessment center typically involves observation of outcomes or performance
behaviors, whereas the multi-rater assessment method relies on perceptions
and/or memories of behavior. Accordingly, assessment center and multi-rater
assessment seem to complement each other perfectly (Howard, 1997).
Research Purposes
The purpose of this article is to develop a competency-based assessment
center design model that integrates competency development, training design,
assessment center activities, and multi-rater assessment strategies. Because of
the integration, the competency-based assessment center evidently differs from
traditional assessment center in scope. As mentioned, we have adopted an over-
arching definition of competency that includes the organization's strategic
functions, so the traditional assessment center, which mainly serves selection
and promotion purposes, no longer satisfies the extended scope. Indeed, the traditional assessment center is developed through job analysis to identify individual work-related characteristics. Such a mechanism is current-focused in nature; it often overlooks or has limited ability to respond appropriately to an organization's strategic, future-oriented functions. By contrast, a competency-based assessment center rooted in competency development and integrated with multi-rater assessment can overcome or compensate for this limitation. This
model attempts to serve multiple purposes. First, it introduces a systematic
approach to linking competency development, training design, assessment cen-
ter strategies, and multi-rater assessment. Second, it provides a design process
that has the potential to enhance the construct-related validity of an assessment
center. Third, the model helps develop a set of propositions for future research.
Conceptual Framework
The model is guided by best practice and research in competency-based
development, training design, assessment center, and multi-rater assessment.
It is important to note that the following notions are not intended to be com-
prehensive. Instead, they provide readers with a generic understanding of how
the model is framed. Only key concepts that underpin the purpose of this arti-
cle are included.
Competency-Based Development
Common practice of competency development is through quantitative
and/or best practice approaches to develop a set of competencies character-
ized by individual skills, knowledge, behaviors, and traits. The quantitative
approach identifies exemplary performers in a specific job and the characteristics that underlie their successful performance (e.g., Spencer & Spencer, 1993). The best practice approach is
through adoption of an existing competency model (e.g., leadership skills
identified by a benchmarked organization or institute) and is often followed
by a dynamic customization of competencies for use in a particular organiza-
tion (e.g., Naquin & Holton, 2003).
Competency-Based Training Design
A generic difference between traditional training and competency-based
training designs is that the former is learning-focused whereas the latter is
based on performance. Accordingly, competency-based training must tie to
work-related performance outcomes such as transfer of learning or behavior
change. Blank (1982) identified four major characteristics of competency-based
programs that are essential for competency-based training design: outcome
driven, trainee centered, task mastering, and high level of proficiency in a job-
related setting.
Assessment Center
As mentioned, the assessment center is a standardized procedure used for
assessing behavior-based or performance-based dimensions whereby partici-
pants are assessed using multiple exercises and/or simulations. According
to Joiner (2000), an assessment center should include 10 key components:
(a) job analysis, (b) behavior classification, (c) assessment techniques, (d) multiple assessments, (e) simulations, (f) assessors, (g) assessor training, (h) recording behavior, (i) reports, and (j) data integration. Common errors of assessment
centers (Caldwell, Thornton, & Gruys, 2003) were also considered in devel-
oping the model. These errors as described by Caldwell et al. (2003) include
(a) poor planning, (b) inadequate job analysis, (c) weakly defined dimensions,
(d) poor exercises, (e) lack of pretest evaluation, (f) unqualified assessors, (g)
inadequate assessor training, (h) inadequate candidate preparation, (i) sloppy
behavior documentation and scoring, and (j) misuse of results.
Multi-rater Assessment
Multi-rater assessment is also known as 360-degree feedback assessment or
multisource assessment. Similar to the assessment center that has gone beyond
its traditional application for selection and promotion, research related to 360-degree feedback has also reached beyond its traditional application for management development to other HR functions such as performance appraisal (Toegel & Conger, 2003). Multi-rater assessments collect information from individuals and their subordinates, peers, supervisors, and customers with regard to their perceptions of the behaviors of interest, such as performance, for evaluative and developmental feedback. The process involves an individual's self-evaluation against a set of criteria and a comparison to norms derived from other raters' assessments of the individual. In other words, multisource assessment or feedback views performance through a more objective, multi-perspective lens and is a dynamic process that provides developmental or evaluative information about one's performance or behavior. Wimer and Nowack (1998) identified 13 common mistakes in using 360-degree feedback: (a) unclear purpose, (b) using it as a substitute for managing a poor performer, (c) lack of pilot testing, (d) no key stakeholder involvement, (e) insufficient communication among people involved in the process, (f) compromising confidentiality, (g) lack of clarity about how the feedback will be used, (h) insufficient resources for implementation, (i) lack of clarification of ownership of the data, (j) administration and scoring that are not user-friendly, (k) improperly linking it to existing systems without a pilot, (l) treating it as an end rather than a process, and (m) failure to measure its effectiveness.
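To illustrate how multisource ratings are commonly summarized, the following minimal Python sketch compares a self-rating with the aggregate of all other rater groups for a single item; the rater groups, scale, and numbers are hypothetical, not drawn from any instrument discussed here.

```python
from statistics import mean

# Hypothetical 360-degree ratings for one individual on one item
# (5-point scale), grouped by rater source.
ratings = {
    "self": [4.0],
    "supervisor": [3.0],
    "peers": [3.5, 2.5, 3.0],
    "subordinates": [2.5, 3.0, 3.5, 3.0],
}

self_score = mean(ratings["self"])
others = [r for source, rs in ratings.items() if source != "self" for r in rs]
gap = self_score - mean(others)

# A positive gap means the individual rates the behavior higher than
# observers do, a typical trigger for developmental feedback.
print(f"self: {self_score:.2f}, others: {mean(others):.2f}, gap: {gap:+.2f}")
```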
Competency-Based Assessment Center Design Model
The model consists of eight components as a practical guide for designing
a competency-based assessment center. Built around the model are nine propo-
sitions, presented below and followed by the authors' rationale for each. The
model can be found in Figure 1.
Building a Hierarchical Competency System
The first step in developing a competency-based assessment center is to
build a hierarchical competency system that breaks a whole into supporting
parts. This step is critical because it lays out a framework to guide training
design and assessment center measurement. The number of levels of compe-
tencies depends on the complexity of a system. The task of identifying com-
petencies in current competency modeling practice takes various forms. Some
identify competencies in specific ways such as in performance or behavioral
indicators (e.g., respond to customers' inquiries politely and consistently, adjust equipment according to a mechanical manual, etc.). Others describe them in
generic or abstract terms (e.g., communication, problem solving, networking,
team building, etc.). As Holton and Lynham (2000) pointed out, competen-
cies are less specific than tasks, but more job related than learning objectives
alone (p. 11).
For discussion purposes, we divide the hierarchical competency system
into three levels: competencies, subcompetencies, and procedures or steps.
Competencies are described in collective, abstract form, whereas their sup-
porting subcompetencies are more measurable, specific, but less collective
than competencies. Subcompetencies normally consist of a set of observable,
specific, behavior-based steps. The three-level hierarchical competency system appears to be effective in communicating with stakeholders and linking competency to training design and assessment center. The first level, which is
in abstract form, can be easily communicated in discussing competency issues
with stakeholders. The second-level items (the subcompetencies) are the
action statements that support competencies. The third level provides detailed
guidelines for achieving the action statements in level two.
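To make the three-level hierarchy concrete, here is a minimal Python sketch that represents competencies, subcompetencies, and steps as nested records; the class names and example statements are our own illustrations rather than elements of the model itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    """Level 3: observable, specific, behavior-based step or procedure."""
    statement: str

@dataclass
class Subcompetency:
    """Level 2: measurable action statement supporting a competency."""
    statement: str
    steps: List[Step] = field(default_factory=list)

@dataclass
class Competency:
    """Level 1: collective, abstract competency."""
    name: str
    subcompetencies: List[Subcompetency] = field(default_factory=list)

communication = Competency(
    name="Communication",
    subcompetencies=[
        Subcompetency(
            statement="Respond to customer inquiries politely and consistently",
            steps=[
                Step("Acknowledge the inquiry within one business day"),
                Step("Restate the customer's concern before answering"),
            ],
        )
    ],
)
```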
Much of the literature in competency development (not assessment center)
addressing validity issues focuses on face or content validity. Specifically, deter-
mining whether the competencies are valid is most often based on subject
matter experts' or managers' judgment. In other words, validity is examined
through a qualitative rather than quantitative lens. The hierarchical competency
system can serve as a conceptual framework for quantitative research to enhance
construct-related validity of competencies. For example, the competencies can
be used as constructs to be assessed, whereas the subcompetencies are vari-
ables to represent the constructs. Researchers can use subcompetencies to
develop a questionnaire or survey that can be distributed to a targeted sample.
The collected data can be analyzed through factor analysis (e.g., Naquin &
Chen, 2006) to examine the relationship between competencies and subcom-
petencies. Caldwell et al. (2003) pointed out a common error of assessment
center practices: weakly defined dimensions or competencies.
FIGURE 1: Integrated Competency-Based Assessment Center Model

Practical Track (Step-by-Step Competency-Based Assessment Center Design):
1. Building a hierarchical competency system: develop a three-level competency system; embrace both qualitative and quantitative approaches to develop and refine definitions of competencies.
2. Linking subcompetency to competency-based training design and competency-based assessment center: subcompetencies serve as central links to training design and competency-based assessment center design (see Figure 2 for details).
3. Developing the subcompetency-assessment center activity matrix: use subcompetencies rather than competencies to develop the matrix; build multi-rater assessment into the matrix design.
4. Differentiating implicit and explicit behavior: differentiate explicit-behavioral and implicit-behavioral subcompetencies.
5. Determining appropriate competency-based assessment center activities: use a numeric scale rather than a dichotomous scale at the subcompetency level to determine the appropriateness of assessment center activities.
6. Determining performance outcomes for activities: performance outcomes are informed by job outcomes in training design and subcompetencies in competency development (see Figure 2 for details); leverage the number of performance outcomes in an activity.
7. Designing competency-based assessment center materials: use customized materials to enhance fidelity; develop action-oriented supporting performance indicators.
8. Selecting and developing assessors: select assessors from two levels higher in the organization than the individuals to be assessed; clearly identify training objectives and performance guidelines in the assessor training; use an experienced, skilled trainer for assessor training.

Research Track (Research Propositions): Propositions 1 through 9, each tied to the corresponding design step and stated in full in the text that follows.
However, factor analysis can easily allow the researcher to determine how well the
competencies were defined and can serve as a means to help refine the defin-
itions of the competencies. Therefore, we develop the following proposition:
Proposition 1: Using factor analysis in addition to qualitative competency development to
examine construct-related validity of competencies will help refine the definitions of
the competencies and enhance the validity of competency-based assessment center.
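As one way such an analysis might be run, the Python sketch below fits scikit-learn's FactorAnalysis to survey responses and prints the item loadings; the sample size, the 12 items, and the three-factor structure are illustrative assumptions, and the random placeholder data stand in for real Likert-type ratings.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Responses: one row per respondent, one column per subcompetency item.
# Random placeholder data; in practice, use the collected survey ratings.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 12)).astype(float)

# Extract as many factors as hypothesized competencies (three, here).
fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(responses)

# Each row shows one item's loading on each factor. Items loading strongly
# on their intended competency support construct-related validity;
# cross-loading items flag competency definitions that need refinement.
loadings = fa.components_.T  # shape: (n_items, n_factors)
for item, row in enumerate(loadings):
    print(f"item {item:2d}: " + "  ".join(f"{value:+.2f}" for value in row))
```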
Linking Subcompetency to Competency-Based
Training Design and Competency-Based Assessment Center
In the hierarchical competency system that the model depicts, the subcom-
petencies serve as key links between competency-based training design and
the design of the assessment center. In a competency-based training design, the
subcompetencies serve as desired job outcomes, representing what training
participants are expected to perform when they return to their respective jobs.
In an assessment center design, the subcompetencies serve as performance
outcomes that participants are expected to demonstrate in an assessment cen-
ter. The relationships among the competency model, the training program, and the
assessment center can be found in Figure 2.
As shown in Figure 2, the competency-based assessment center is linked to the competency model and the training program through subcompetencies, job outcomes, and performance outcomes. The statements for these three components are identical, whereas each of their supporting components (e.g., steps or procedures, learning objectives, or performance indicators) may differ in description. Through this design, the intended measures for each stage are tightly connected. Therefore, we develop
the following proposition:
Proposition 2: Linking subcompetency to a training design and assessment center will
improve the construct-related (competency) validity of competency-based assessment
center.
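A minimal sketch of the identical-statement rule described above: the job outcome in the training design and the performance outcome in the assessment center are both taken verbatim from the same subcompetency statement, so only the supporting components differ. The function names and the example statement are ours, for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Subcompetency:
    statement: str

def as_job_outcome(sub: Subcompetency) -> str:
    # Training design: the job outcome reuses the subcompetency
    # statement verbatim, so the intended measure stays the same.
    return sub.statement

def as_performance_outcome(sub: Subcompetency) -> str:
    # Assessment center / multi-rater design: likewise verbatim.
    return sub.statement

sub = Subcompetency("Facilitate a staff meeting to a documented decision")
assert as_job_outcome(sub) == as_performance_outcome(sub) == sub.statement
```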
Developing Subcompetency-Assessment Center Activity Matrix
Developing a competency exercise matrix is a basic requirement for
assessment center development (Joiner, 2000). Current practices for develop-
ing such a matrix are conducted at competency level, which is an abstract level
(e.g., Halman & Fletcher, 2000). However, using an abstract competency to
develop assessment center exercises can potentially jeopardize the validity of
selected assessment center activities because such a matrix cannot identify the
most appropriate activities to assess the competencies. For example, from a
generic view, one may select role-play activities to assess an individual's communication competency. However, the communication competency can
encompass written and oral skills; hence, the role-play does not address all nec-
essary communication skills. Therefore, we develop the following proposition:
Proposition 3: Using subcompetencies, which collectively represent competencies in a
more observable way, to develop the assessment center activity matrix will enhance the
construct-related validity of competency-based assessment center.
FIGURE 2: Relationship Between Competency Model, Training Program, Assessment Center, and Multi-Rater Feedback Assessment

Competency Model: Competencies → Subcompetencies → Steps or Procedures
Training Program: Job Outcomes → Learning Objectives
Competency-Based Assessment Center:
  Assessment Center: Performance Outcomes → Performance Indicators
  Multi-Rater Assessment: Performance Outcomes → Performance Indicators

Notes:
1. The competency model triggers the training and competency-based assessment center designs.
2. The competency-based assessment center includes a traditional assessment center and a multi-rater assessment.
3. Competencies are in collective, abstract form.
4. Subcompetencies are more measurable and specific but less collective than competencies.
5. Steps or procedures are observable, specific, and behavior-based; they are described in very specific form in support of subcompetencies, which in turn support competencies.
6. Subcompetencies inform job outcomes in the training program design and performance outcomes in the assessment center and multi-rater assessment designs. Statements of subcompetencies, job outcomes, and performance outcomes are identical.
7. Learning objectives support job outcomes in a training design.
8. Job outcomes are work-related outcomes, whereas learning objectives are supported by training materials.
9. Performance outcomes are the general indicators that the assessment center and multi-rater assessments are targeted to measure.
10. Performance indicators are specific indicators in support of performance outcomes.
Differentiating Implicit and Explicit Behavior
As previously mentioned, an assessment center has limited ability to measure individuals' implicit characteristics. Although one may argue that implicit behavior can be measured by transferring it to an explicit format, it evidently cannot be effectively managed in an assessment center. This is because implicit behavior is fairly complex and enduring. If not appropriately rendered, it can easily jeopardize the validity of the assessment center. Consequently, assessing implicit-behavioral competencies in a traditional assessment center could create more problems than it solves. It is very likely that the construct-related validity issue of an assessment center results from a lack of differentiation between explicit-behavioral and implicit-behavioral competencies. A multi-rater assessment appears to be a more effective tool for assessing implicit behavior. Integrating multi-rater assessment into the assessment center design also provides the flexibility to reduce complexity and avoid common errors. (See more discussion in a later section.) Therefore, we develop the
following proposition:
Proposition 4: Differentiating between explicit-behavioral and implicit-behavioral sub-
competencies will improve the construct-related validity of competency-based assess-
ment center, where explicit-behavioral subcompetencies are measured by traditional
assessment center mechanisms and implicit-behavioral subcompetencies are assessed
by multi-rater assessments.
Determining Appropriate Competency-Based
Assessment Center Activities
Current research in developing the competency (or subcompetency) activ-
ity matrix uses simple check marks to determine the exercise to be used for a
particular competency. However, this approach provides no information on
how well the competencies fit the exercises. We suggest using numeric ratings
such as a 5-point Likert-type scale to determine the appropriateness of assess-
ment center activities to the subcompetencies by treating the matrix as a ques-
tionnaire. This approach not only helps in alleviating subjective decisions but
also provides more valid information on the degree to which subcompetencies
fit assessment center activities.
This approach requires a group of participants to rate the questionnaire; aggregated scores are then calculated from the collected data. Although it may sound impractical to involve a group of individuals in rating the matrix, if this approach can enhance the validity of the competency-based assessment center design, it should be considered. In fact, as long as each activity in the matrix is clearly defined, any manager or trainer in an organization should be able to serve as a rater for the questionnaire.
Moreover, according to the Guidelines and Ethical Considerations for
Assessment Center Operations (Joiner, 2000), to increase the chance of
obtaining objective data, each dimension or competency should include
more than one assessment exercise. The numeric scale has merit in assist-
ing a competency-based assessment center designer to select the most
appropriate activity to be used. The following strategies are designed to
help determine the most appropriate activities to be used in an assessment
center:
(a) Select two top-ranked activities for each of the subcompetencies.
(b) If more than two activities are tied as top-ranked, consider
using all of them.
(c) If no rating for a particular subcompetency is greater than 3.0,
its applicability to any of the exercises is low. Therefore, con-
sider a multi-rater questionnaire as a more appropriate approach
to assess the subcompetency.
Based on the rationale just discussed, the following proposition is developed:
Proposition 5: Using a numeric rating scale (along with appropriate subcompetency selec-
tion strategies) rather than a dichotomous scale will lead to an appropriate assess-
ment center activity selection. Therefore, the numeric scale will indirectly influence
the construct-related validity of competency-based assessment center.
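The three strategies translate directly into a small selection algorithm. The Python sketch below applies them to a hypothetical matrix of aggregated mean ratings (rows are subcompetencies, columns are activities); all names and scores are invented for illustration.

```python
import numpy as np

activities = ["role-play", "in-basket", "presentation", "group discussion"]
# Hypothetical mean ratings on a 5-point scale, aggregated over raters.
ratings = np.array([
    [4.4, 2.1, 4.4, 3.0],   # e.g., an oral-communication subcompetency
    [1.8, 2.6, 2.2, 2.4],   # e.g., an implicit-behavioral subcompetency
])

for i, row in enumerate(ratings):
    if row.max() <= 3.0:
        # Strategy (c): no rating above 3.0, so route the subcompetency
        # to a multi-rater questionnaire instead of an exercise.
        print(f"subcompetency {i}: multi-rater assessment")
        continue
    # Strategies (a) and (b): keep the two top-ranked activities,
    # including every activity tied with them at the top.
    order = np.argsort(row)[::-1]
    cutoff = row[order[1]]  # rating of the second-ranked activity
    chosen = [activities[j] for j in range(len(row)) if row[j] >= cutoff]
    print(f"subcompetency {i}: {chosen}")
```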
Determining Performance Outcomes for Activities
This step requires composing, for each activity, a list of the appropriate subcompetencies or performance outcomes (those for which the activity was ranked among the top two). Their supporting performance indicators will be aligned with the activity design.
It is important to note that the competency-based assessment center designers
should not overrely on quantitative data as presented here to design the assess-
ment center activities, because quantitative data are only meaningful if well
interpreted. Activity designers should always review these subcompetencies to
examine the appropriateness of fit. Our suggestion is to move less congruent
subcompetencies in an activity to multi-rater assessment.
In addition, research suggests that one activity should not include too many
measures; otherwise, the assessors' ratings could be biased by intuition owing to the limits of one's cognitive ability to differentiate complex situations under time pressure (Lievens & Klimoski, 2001). When the number
of subcompetencies increases, a competency-based assessment center activity
designer should use judgment to avoid the problem of measuring too many
subcompetencies in a single exercise or activity. Thornton (1992) suggested
5 to 10 dimensions (subcompetencies in this context) to be assessed for
various assessment centers, whereas Thornton and Mueller-Hanson (2004)
stated that in practice, consultants only measure 4 or 5 dimensions in an exer-
cise. Synthesizing the findings and suggestions in this literature, it is reasonable to assert that no more than 10 dimensions are practical for an activity.
On the other hand, from a cost-effectiveness perspective, for an activity with fewer
than 5 subcompetencies to be measured, it is also reasonable to eliminate the
activity and move the subcompetencies classified in this activity to multi-rater
assessment. Therefore, we develop the following proposition:
Proposition 6: Measuring no more than 10 subcompetencies in an activity will enable
assessors to accurately assess the subcompetencies that are supposed to be measured.
Doing this will increase the construct-related validity of the competency-based assess-
ment center measurement.
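Together, the ceiling of 10 and the cost-effectiveness floor of 5 yield a simple pruning rule over the matrix output, sketched below in Python; the activity-to-subcompetency assignment is hypothetical.

```python
# Hypothetical assignment produced by the matrix step:
# each activity maps to the subcompetencies routed to it.
assignment = {
    "in-basket": ["prioritize tasks", "delegate work", "draft replies"],
    "role-play": [f"subcompetency {k}" for k in range(12)],
}
multi_rater = []

for activity, subs in list(assignment.items()):
    if len(subs) < 5:
        # Too few measures to justify running the exercise: drop it and
        # move its subcompetencies to multi-rater assessment.
        multi_rater.extend(assignment.pop(activity))
    elif len(subs) > 10:
        # Practical ceiling from the literature: assessors cannot reliably
        # track more than about 10 dimensions in a single exercise.
        print(f"warning: '{activity}' carries {len(subs)} subcompetencies; "
              "split the exercise or move some to multi-rater assessment")
```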
Designing Competency-Based Assessment Center Materials
There are two methods or models for developing assessment center materi-
als: using off-the-shelf materials and using customized materials. Thornton
(1992) suggested that fidelity of assessment center design would help
improve the validity of performance outcomes. The notion of fidelity is essen-
tial to designing activities or cases that closely relate to participants' day-in-the-life work situations. Therefore, the use of a customized model adds more value
to this systematic competency-based assessment center design. Based on the
rationale, the following proposition is developed:
Proposition 7: Using customized assessment center materials, which are designed to
closely relate to participants' work settings, will lead to stronger criterion-related (predictive) validity of competency-based assessment center.
Thornton and Mueller-Hanson (2004) suggested that several sets of exercise
materials must be designed for various individuals involved in the exercises.
These individuals include participants, administrators, assessors, resource
persons, and role-players. In addition to determining the subcompetency (per-
formance outcome) or supporting performance indicators for exercise devel-
opment, a competency-based assessment center designer should also consider
factors such as setting, technology, and level of difficulty of the indicators
when designing exercise materials. Therefore, the following proposition is
developed:
Proposition 8: Developing customized materials for different individuals (e.g., adminis-
trators, assessors, resource persons, role-players, etc.) involved in the competency-
based assessment center and building extraneous factors (e.g., setting, technology, and
level of difficulty of indicators) into design will lead individuals to better understand
the process of assessment center and, therefore, can indirectly improve the construct-
related validity of competency-based assessment center (rating accuracy).
In developing multi-rater assessments, the supporting performance indica-
tors should be as action-oriented as possible (e.g., starting with an action verb).
An appropriate supporting performance indicator may include the performance
to be measured, the condition under which the performance occurs, and the criterion for determining the effectiveness or efficiency of the performance (Mager, 1997). For instance, an illustrative indicator might read: given a simulated customer complaint (condition), draft a written response (performance) that addresses every stated concern within 15 minutes (criterion).
Selecting and Developing Assessors
Selecting and developing qualified assessors usually go hand in hand. In
selecting assessors, Spychalski, Quinones, Gaugler, and Pohley (1997) found
that best practice incorporates line or staff management as assessors and these
assessors are generally two organizational levels higher than the individuals to
be assessed, whereas some assessment center practices used psychologists as
assessors. In addition, research on the effect of assessors' individual backgrounds shows mixed results. For example, Gaugler, Rosenthal, Thornton, and
Bentson (1987) found that assessment centers that used psychologists as assessors exhibited higher criterion-related validity than those that used managerial assessors.
However, Thomson (1970) found no significant differences between ratings of
psychologist and manager assessors. Although the assessment center guide-
lines suggested considering professional psychologists as assessors, from a
practical standpoint it is plausible to select assessors from the target organiza-
tion. The more important point is perhaps to keep these selected assessors
(e.g., managers) well trained in how to assess assessees' performance before
engaging in an assessment center activity.
In addition, according to the guidelines, the assessor training should
clearly state training objectives and performance guidelines. The objectives
of assessor training are to help assessors make reliable and accurate judgments. Content of the assessor training may include (a) knowledge and understanding of assessment dimensions, (b) definitions of dimensions,
(c) relationship to job performance, (d) examples of effective and ineffective
performance, (e) simulations on exercises to be assessed, (f) ratings issues,
(g) data integration, (h) feedback procedures, and so on. Training length
should be determined in connection with other considerations, such as trainer
and instructional design, assessor capability, and assessment program. It is
also important to consider establishing a continually improving training
system to help assessors maintain skills, knowledge, and attitudes. More
detailed issues related to assessor training can be found in the assessment
center guidelines (see Joiner, 2000).
Finally, the trainer who delivers assessor training should be familiar with simulation
exercises, have a deep understanding of issues related to assessor training, and
continually communicate with competency-based assessment center designers
and a program champion. This is because the competency-based assessment
center designers are experts in the functions of assessment center design,
whereas program champions have broader insights on how the program works.
Both can contribute to the success of assessor training if the communication
system is well established and utilized.
Proposition 9: Well-trained assessors will contribute to the criterion-related validity of
competency-based assessment center.
Discussion and Implications for HRD Practice
Designing and implementing an assessment center is labor-intensive,
time-consuming, expensive, and difficult to manage (Dulewicz, 1991). Perhaps
these are the major reasons that assessment centers do not receive enough atten-
tion in HRD. This article ties competency development to training program
design, assessment center, and multi-rater assessment, resulting in an integra-
tive, competency-based assessment center model that has profound implica-
tions for HRD practice. Some training effectiveness practices that focus on
participants' reactions and learning reveal little about how participants can apply what they learned to their jobs. Implementing a competency-based assessment center in an organization can appropriately address this question. In addition, as this article demonstrates, the competency-based assessment center design model can also be readily integrated with other HR functions
(e.g., promotion, selection, performance appraisal, etc.) through a common link,
subcompetencies. These facts seem obvious, but many organizational stake-
holders do not naturally recognize them. To enhance communications between
HRD practitioners and organizational stakeholders with regard to the benefits
of adopting competency-based assessment center, we have developed a set of
strategies for practical considerations.
Educating Organizational Stakeholders
The major objective for this strategy is to advocate various advantages
and applications of assessment center to organizational stakeholders. Specific
information that can be provided may include (a) the reasons that assessment
center is important, (b) how an assessment center can effect change, (c) the
differences between traditional assessment center and competency-based
assessment center, and (d) benefits of competency-based assessment center to
individual development and organizational effectiveness as a whole.
Collecting and Providing Cost-Effective Information
Cost-effectiveness is always a concern for organizational decision makers.
For a competency-based assessment center to be adopted by an organization,
it is imperative to collect cost-effectiveness information on existing assessment
center practices from other organizations and to provide information regarding
costs associated with a competency-based assessment center program that will
be implemented. Providing information regarding the strategies that will be
utilized to maintain cost-effectiveness is also necessary.
Providing Comparative Information
This strategy mainly deals with providing information on what other orga-
nizations in the same industry have done regarding an assessment center and
how well the assessment center has helped the organization improve perfor-
mance. This information helps decision makers judge how an assessment center could change their organization.
Articulating Purposes of the Competency-Based Assessment
Center and the Purposes for Which the Data Are to Be Used
Because an assessment center can be used for various purposes, it is impor-
tant that competency-based assessment center designers and implementers
fully articulate the purposes and describe how the data collected from the cen-
ter will be used. When a new tool such as a competency-based assessment cen-
ter is implemented for evaluation purposes, it is inevitable that resistance will
be met. Anxiety and motivation to change are often related to resistance.
Therefore, articulating the purposes could be the key to reducing the anxiety
and enhancing the motivation for stakeholders to adopt the program.
Implementing a Pilot Test
If the competency-based assessment center design process aims to produce a customized assessment center, implementing a pilot test can provide formative feedback for program design. The pilot test also helps in examining
practicality and other issues (e.g., culture fit) that may arise when implement-
ing an assessment center in an organization.
Communicating Responsibilities
The implementation of a competency-based assessment center cannot solely
rely on designers or implementers. Organizational stakeholders' involvement
and support will be key to its success. Therefore, communicating responsibil-
ities before a center is implemented is as important as the other strategies pro-
posed here.
Implications for HRD Research
Through the development of competency-based assessment center, this arti-
cle provided nine propositions to guide future research in HRD. We advise that the propositions be examined as a whole because they are closely linked to the model and all relate to validity issues. For example, a project the
authors led in designing a competency-based assessment center for assessing
learning outcomes of a statewide leadership and management training program
in the state of Louisiana has adopted the concepts and processes proposed in
this article (see Melancon & Williams, 2006 [this issue]). The adoption allows
researchers to empirically examine all the propositions. On the other hand,
these propositions could also be examined individually. For example, one may
assess whether a set of subcompetency definitions refined by the results of a factor analysis enhances the validity of a competency-based assessment center. Another example would be to examine whether differentiating explicit-
behavioral subcompetencies (as measured by a traditional assessment center)
and implicit-behavioral subcompetencies (as measured by multi-rater assess-
ment) will lead to an improvement of construct-related validity of competency-
based assessment center.
Conclusions
Traditional assessment centers have been challenged by a lack of strong construct-related validity. This article, through a systemic, integrative perspective, focuses on design aspects of a competency-based assessment center to address the validity issues of assessment centers. The integrative model
not only expands the scope of traditional assessment centers by incorporating
multi-rater assessment into design but also guides HRD practitioners on how
to design a competency-based assessment center that has the potential to improve construct-related validity and can be built into training design, assessment, and other HR functions. In addition, the model provides a set of
research propositions to be examined.
References
Blank, W. E. (1982). Handbook for developing competency-based training programs.
Englewood Cliffs, NJ: Prentice Hall.
Boyatzis, R. E. (1982). The competent manager: A model for effective performance.
New York: John Wiley.
Byham, W. C. (2004). Developing dimension-competency-based human resource
systems. Retrieved March 25, 2004, from http://www.ddiworld.com/research/
publications.asp
Byham, W. C., & Moyer, R. P. (2004). Using competencies to build a successful orga-
nization. Retrieved March 25, 2004 from http://www.ddiworld.com/research/
publications.asp
Caldwell, C., Thornton, G. C., & Gruys, M. L. (2003). Ten classic assessment center
errors: Challenges to selection validity. Public Personnel Management, 32, 73-88.
Chen, H.-C. (2006). Assessment center: A critical mechanism for assessing HRD
effectiveness and accountability. Advances in Developing Human Resources, 8(2),
247-264.
Dubois, D. D., & Rothwell, W. J. (2004). Competency-based human resource manage-
ment. Palo Alto, CA: Davies-Black.
Dulewicz, V. (1991). Improving assessment centers. Personnel Management, 23, 50-55.
Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., III, & Bentson, C. (1987). Meta-
analysis of assessment center validity. Journal of Applied Psychology, 72, 493-511.
Halman, F., & Fletcher, C. (2000). The impact of development centre participation
and the role of individual differences in changing self-assessments. Journal of
Occupational and Organizational Psychology, 73, 423-442.
Hoffmann, T. (1999). The meanings of competency. Journal of European Industrial
Training, 23, 275-285.
Holton, E. F., III, & Lynham, S. A. (2000). Performance-driven leadership develop-
ment. Advances in Developing Human Resources, 6, 1-17.
Howard, A. (1997). A reassessment of assessment centers: Challenges for the 21st century. Journal of Social Behavior and Personality, 12, 13-52.
Joiner, D. A. (2000). Guidelines and ethical considerations for assessment center oper-
ations: International task force on assessment center guidelines. Public Personnel
Management, 29, 315-331.
Joyce, L. W., Thayer, P. W., & Pond, S. B., III. (1994). Managerial functions: An
alternative to traditional assessment center dimensions? Personnel Psychology, 47,
109-121.
Lievens, F., & Klimoski, R. J. (2001). Understanding the assessment center process:
Where are we now? In C. L. Cooper, & I. T. Robertson (Eds.), International Review
of Industrial and Organizational Psychology (Vol. 16). New York: John Wiley.
Lowry, P. E. (1995). The assessment center process: Assessing leadership in the public
sector. Public Personnel Management, 24, 443-450.
Lucia, A. D., & Lepsinger, R. (1999). The art and science of competency models:
Pinpointing critical success factors in organizations. San Francisco: Jossey-Bass.
Mager, R. F. (1997). Preparing instructional objectives. Atlanta, GA: CEP Press.
McLagan, P. A. (1997). Competencies: The next generation. Training and Development,
51, 40-47.
Melancon, S. C., & Williams, M. (2006). Competency-based assessment center design:
A case study. Advances in Developing Human Resources, 8(2), 283-314.
Moingeon, B., & Edmondson, A. (1996). Organizational learning and competitive
advantage. Thousand Oaks, CA: Sage.
Naquin, S., & Chen, H.-C. (2006). Construct validation of the Louisiana
Managerial/Supervisory Survey. In F. M. Nafukho & H.-C. Chen (Eds.), Academy
of Human Resource Development Conference Proceedings (pp. 1400-1407).
Bowling Green, OH: Academy of Human Resource Development.
Naquin, S. S., & Holton, E. F., III. (2003). Redefining state government leadership and
management development: A process for competency-based development. Public
Personnel Management, 32, 23-46.
Prahalad, C. K., & Hamel, G. (1990). The core competence of the corporation. Harvard
Business Review, 68, 79-91.
Robie, C., Osburn, H. G., Morris, M. A., Etchegaray, J. M., & Adams, K. A. (2000).
Effects of the rating process on the construct-related validity of assessment center
dimension evaluations. Human Performance, 13, 355-370.
Shippmann, J. S., Ash, R. A., Battista, M., Carr, L., Eyde, L. D., Hesketh, B., et al.
(2000). The practice of competency modeling. Personnel Psychology, 53, 703-740.
Spencer, L. M., Jr., & Spencer, S. M. (1993). Competence at work: Models for supe-
rior performance. New York: John Wiley.
Spychalski, A. C., Quinones, M., Gaugler, B. B., & Pohley, K. (1997). A survey of
assessment centre practices in organizations in the United States. Personnel
Psychology, 50, 71-90.
Thomson, H. A. (1970). Comparison of predictor and criterion judgments of managerial
performance using the multitrait-multimethod approach. Journal of Applied
Psychology, 54, 496-502.
Thornton, G. C., III. (1992). Assessment center in human resource management.
Reading, MA: Addison-Wesley.
Thornton, G. C., III, & Mueller-Hanson, R. A. (2004). Developing organizational
simulations. Mahwah, NJ: Lawrence Erlbaum.
Toegel, G., & Conger, J. A. (2003). 360-degree assessment: Time for reinvention.
Academy of Management Learning and Education, 2, 297-311.
Wimer, S., & Nowack, K. M. (1998, May). 13 common mistakes using 360-degree
feedback. Training and Development, 52(5), 69-70.
Woehr, D. J., & Arthur, W., Jr. (2003). The construct-related validity of assessment
center ratings: A review and meta-analysis of the role of methodological factors.
Journal of Management, 29, 231-258.
Hsin-Chih Chen, PhD, is a statistician/research analyst at Amedisys, Inc., a leading
provider of home health care services, where he conducts data-driven research on qual-
ity of services, market analyses, and corporate strategies across all levels. Prior to join-
ing Amedisys, Inc., he served as a postdoctoral researcher at Louisiana State University.
He has published a number of research articles in peer-reviewed human resource development journals and currently serves as associate editor for the 2006 International Conference Proceedings of the Academy of Human Resource Development. His recent
research interests include competency-based development, assessment center, transfer of
learning, and effectiveness, strategy, and philosophy of human resource development.
His doctorate was completed in human resource development at Louisiana State
University.
Sharon S. Naquin, PhD, is director of the Louisiana State University (LSU) Division
of Workforce Development and an associate professor in the LSU School of Human
Resource Education. She has conducted extensive research in managerial and
leadership competency development and works with municipal and private agencies on
strategic planning and organizational development initiatives.
Chen, H.-C., & Naquin, S. S. (2006). An integrative model of competency development,
training design, assessment center, and multi-rater assessment. Advances in Developing
Human Resources, 8(2), 265-282.