
u05d1 Validity

In the Kaplan and Saccuzzo text, read Chapter 5, "Validity," pages 132–156.
In the Standards text, read Chapter 1, "Validity," pages 7–24.
Discuss the concept of validity as it relates to testing and measurement. Select one of the forms
of validity and discuss it in-depth. Estimate the impact of this form of validity on the outcome of
an evaluation. Compose a strategy for addressing the validity of the tests you are likely to use in
your current or future occupation.

Response Guidelines
Compare and contrast the points contained in your post to the post of another learner. Discuss
how the postings are similar and how they differ.
Simply stated, a test is considered valid if it accurately measures what it is intended to measure.
Validity is a unitary concept and one of the most fundamental in the development and evaluation
of a testing instrument. Validity and reliability are related but fundamentally different: validity
concerns accuracy (whether the test measures the right thing), while reliability concerns
precision (whether it measures consistently).
According to the Standards for Educational and Psychological Testing, there are three sources of
evidence for test validity: (1) content-related, (2) criterion-related, and (3) construct-related
(AERA, APA, & NCME, 1999). Content-related evidence is a critical concern in educational
testing (Kaplan & Saccuzzo, 2009); it addresses whether the content of a test adequately
represents the subject matter the test is intended to cover. Criterion-related evidence concerns
how well test scores correlate with an external criterion measure. Construct-related evidence is
more difficult to establish. A construct is a scientific concept or abstraction that is defined
gradually as test data accumulate over time, and construct-related evidence is gathered by
comparing test results to other, similar measures to refine and confirm the meaning of what the
test actually measures.
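Criterion-related evidence is usually summarized as a validity coefficient, the correlation between test scores and the external criterion. The sketch below illustrates that computation with invented data (the scores, ratings, and variable names are hypothetical, not from the readings):

```python
# Illustration only: criterion-related evidence summarized as a Pearson
# correlation between test scores and an external criterion measure.
# All data below are invented for demonstration.

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical example: selection-test scores vs. later job-performance ratings.
test_scores = [52, 61, 47, 70, 66, 58, 73, 49]
performance = [3.1, 3.8, 2.9, 4.4, 4.0, 3.5, 4.6, 3.0]

validity_coefficient = pearson_r(test_scores, performance)
print(f"criterion validity coefficient r = {validity_coefficient:.2f}")
```

A coefficient near 1.0 would indicate strong criterion-related evidence; values near zero would suggest the test does not predict the criterion.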
Content-related evidence for validity is the only one of the three types mentioned above that
rests more on logic than on statistics (Kaplan & Saccuzzo, 2009). Content-related evidence
requires that the test be logically related to the subject matter it is intended to measure, and it
obliges test developers to consider the wording and reading level of the items as well. Typically,
multiple judges rate the relevance of each item to the test content (Rubio, Berg-Weger, Tebb,
Lee, & Rauch, 2003, as cited in Kaplan & Saccuzzo, 2009). Two concepts relevant to
content-related validity are construct underrepresentation and construct-irrelevant variance
(AERA, APA, & NCME, 1999). Construct underrepresentation means that the test omits
important aspects of the construct it is supposed to measure; for example, an English test with
no grammar section suffers from construct underrepresentation. Construct-irrelevant variance
refers to factors irrelevant to the construct that nevertheless influence test scores, such as
reading level, vocabulary, or anxiety.
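The judge-rating procedure above can be quantified. One widely used statistic for this purpose (not discussed in the readings, so it is offered here only as an illustrative sketch) is Lawshe's content validity ratio, CVR = (n_e − N/2) / (N/2), where n_e is the number of judges rating an item "essential" and N is the total number of judges:

```python
# Sketch of Lawshe's content validity ratio (CVR) for quantifying
# expert-judge agreement on item relevance. The panel and counts
# below are invented for demonstration.

def content_validity_ratio(n_essential, n_judges):
    """CVR ranges from -1 (no judge says 'essential') to +1 (all do)."""
    half = n_judges / 2
    return (n_essential - half) / half

# Hypothetical panel: 10 judges rate each item as essential or not.
essential_counts = {"item_1": 9, "item_2": 5, "item_3": 2}
for item, n_e in essential_counts.items():
    print(item, round(content_validity_ratio(n_e, 10), 2))
```

Items with low or negative CVR values would be candidates for revision or removal, which directly addresses construct underrepresentation and irrelevant content.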
For a clinical psychologist, the most meaningful type of validity evidence is construct-related
evidence. When measuring depression, suicidal ideation, or other psychological conditions,
psychologists often examine the relationship between the construct and other, similar tests and
measurements to accumulate construct-related evidence. Convergent evidence is a type of
construct-related evidence showing that a test's measurements converge on the same result as
other tests of the same construct. Discriminant evidence, by contrast, is shown when the test
correlates only weakly with measures of constructs it is not intended to tap, supporting the claim
that it measures a unique construct. Together, these patterns give a psychologist helpful
additional information about a client's psychological condition.
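The convergent/discriminant pattern can be made concrete as a pair of correlations: a new scale should correlate strongly with an established measure of the same construct and weakly with a measure of an unrelated one. The sketch below uses invented scale names and scores purely for illustration:

```python
# Illustration only: convergent vs. discriminant evidence as a pattern of
# correlations. A hypothetical new depression scale should correlate highly
# with an established depression measure (convergent) and weakly with a
# conceptually distinct trait such as extraversion (discriminant).
# All data below are invented.

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

new_depression_scale    = [12, 18, 25,  9, 30, 15, 22,  7]
established_depression  = [14, 19, 27, 10, 31, 16, 21,  8]
extraversion_measure    = [19, 11, 17, 18, 14, 20, 11, 10]

convergent_r = pearson_r(new_depression_scale, established_depression)
discriminant_r = pearson_r(new_depression_scale, extraversion_measure)
print(f"convergent r = {convergent_r:.2f}, discriminant r = {discriminant_r:.2f}")
```

A high convergent correlation alongside a near-zero discriminant correlation would support the claim that the new scale measures depression specifically, rather than general distress or some unrelated trait.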
References
American Educational Research Association, American Psychological Association, & National
Council on Measurement in Education. (1999). Standards for educational and psychological
testing. Washington, DC: American Educational Research Association.
Kaplan, R. M., & Saccuzzo, D. P. (2009). Psychological testing: Principles, applications, and
issues (7th ed.). Belmont, CA: Cengage.