
INTRODUCTION

Billions of examinations and assessments are administered every year across educational institutions and
industries around the globe. And what are these tests for? Their purpose is to measure the knowledge and
competence of an individual accurately and scientifically. By delivering a test, you aim to use the
information it provides to make decisions about the individuals taking it. Are they competent to practice a
certain profession? Do they have the basic knowledge and skills needed to succeed at university? Do they
understand the road rules well enough to drive a car safely?

Tests have traditionally been administered through the paper-and-pencil test (PPT) method, but in the
twentieth century researchers sought a more convenient means of administering tests. This led to the
introduction of the computer-based test (CBT), which continues to impress and has seen more and more
institutions adopt it as a reliable means of test administration.

In Nigeria, the Joint Admissions and Matriculation Board, the University of Ilorin and the National Open
University, amongst others, have adopted this method, and plans are under way for other educational
institutions and examination bodies to follow suit.

In this paper we compare and contrast the paper-and-pencil test (PPT) and the computer-based test (CBT).

PAPER AND PENCIL TEST

Paper-and-pencil tests refer to a general group of assessment tools in which candidates read questions and
respond in writing. This includes tests, such as knowledge and ability tests, and inventories, such as
personality and interest inventories.

Paper-and-pencil tests can be used to assess subject or course knowledge, job-related knowledge, and
ability or skill qualifications. Because many candidates can be assessed at the same time with a
paper-and-pencil test, such tests are an efficient method of assessment.

Developing paper-and-pencil tests: 4 basic steps.

Like all assessment methods, a paper-and-pencil test must provide information that is relevant to the
course or subject being assessed. The following four steps ensure that paper-and-pencil tests provide this
information.

1, Listing topic areas

For each knowledge area that will be assessed by the test, list the topic areas to be covered. Check off
any critical topic areas that are particularly important to the course or subject.
For example, the topic areas covered for this course include psychological tests, psychometrics, test
construction, test standardization, and psychological constructs, their measurement, and their
applications to computer programming.
2, Specifying the response format, number of questions, the time limit and difficulty level
Prior to writing the questions for your test, you should decide on such things as the response format,
the number of questions, the time limit and the difficulty level.

What type of response format should I choose?


The three most common response formats are:
(a) multiple-choice;
(b) short answer; and
(c) essay.

(a) Multiple-choice
With a multiple-choice response format, a large number of different topic areas can be covered within
the same test and the questions are easy to score. However, because every answer option, including the
incorrect distractors, must be plausible enough to attract some students, it is time-consuming to write
good questions.

(b) Short-answer
With a short-answer response format, a large number of different topic areas can be covered within the
same test and these questions are easy to score. In addition, less time is required to write these
questions compared to multiple-choice ones.

(c) Essay
With an essay response format, only a few topic areas can be covered due to the amount of time it
takes to answer questions; however, the content can be covered in greater detail. Essay questions
require little time to write but they are very time-consuming to score.
Although at first glance a multiple-choice format may seem a relatively easy and logical choice if
breadth of coverage is emphasized, don't be fooled. It is hard to write good multiple-choice questions
and you should only choose this type of response format if you are willing to devote a lot of time to
editing, reviewing, and revising the questions. If depth of coverage is emphasized, use an essay
response format.
While preparing a paper-and-pencil test, the number of questions you need depends on the breadth and
depth of coverage required and the importance of each topic area. Generally, the more important a
topic area, the more questions you should have. You should initially write several questions for each
topic area so that you can choose the best ones for the final version of the test. With a multiple-choice
response format, 30 questions are often sufficient. If you are using a short-answer or essay format,
fewer questions will be required.
Unless speed is being assessed, the time limit should be set to allow the majority of students to finish
within the allotted time. While developing questions, one should design the test to have a range of
difficulty levels.
3, Writing the questions and developing the scoring guide.
All questions should tap meaningful information. Also, the level of language used for the questions
should be appropriate for the students to fully understand. The questions do not have to always be
expressed verbally. Diagrams, graphs, or tables may be incorporated into a question where useful.
When you are writing each question, you should prepare the answers, designate the marks to be
allotted to each item, and decide on the rules for scoring. This ensures that there is a clear-cut answer
for each question. It also allows you to indicate the value of each question so that candidates can
decide for themselves the amount of time they should spend on each question. The marks assigned to
each question should reflect the relative importance of the question.

(a) Multiple-choice
The scoring guide for multiple-choice questions must include a scoring key indicating the correct
answer and it may also include a rationale for or explanation of the correct answer. If marks are to be
deducted for guessing, this must be determined and stated in the instructions to students.

(b) Short-answer
The scoring guide for short-answer questions should include predetermined scoring procedures and
mark allocations. Each required point in the answer should be listed with its relative mark allocation.

(c) Essay
The scoring guide for essay questions should include predetermined scoring procedures and mark
allocations. The major points of the answer should be listed with their relative mark allocation. If
marks are to be deducted for incorrect grammar, spelling and punctuation, this must be stated in the
instructions to students.
The scoring guides for short-answer and essay questions should be clear enough so that scorers can
judge whether or not marks should be given to a variation of the answer.
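A scoring guide of the kind described above can also be written down as code. The sketch below shows one way a multiple-choice scoring key with an optional correction for guessing might look; the item keys, mark values, and penalty are hypothetical examples, not taken from any particular test.

```python
# Minimal sketch of a multiple-choice scoring guide as code.
# The key, marks, and penalty values are hypothetical examples.

def score_multiple_choice(responses, key, marks_per_item=1, guess_penalty=0.0):
    """Score a dict of {item: chosen_option} against a scoring key.

    Unanswered items (missing from `responses`) earn 0 and are not
    penalized; wrong answers lose `guess_penalty` marks if a
    correction for guessing is in force.
    """
    total = 0.0
    for item, correct_option in key.items():
        chosen = responses.get(item)
        if chosen is None:
            continue                      # blank: no marks, no penalty
        elif chosen == correct_option:
            total += marks_per_item       # correct answer
        else:
            total -= guess_penalty        # stated penalty for guessing
    return total

key = {1: "b", 2: "d", 3: "a"}
answers = {1: "b", 2: "c"}                # item 3 left blank
print(score_multiple_choice(answers, key, marks_per_item=1, guess_penalty=0.25))
# one correct (+1), one wrong (-0.25), one blank (0) -> 0.75
```

Note how the guessing penalty must be decided in advance, exactly as the scoring-guide instructions require it to be stated to students.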
4, Reviewing questions and scoring guide.
Have the questions and scoring guide reviewed by a senior colleague or another teacher familiar with
the course or subject. Have them confirm that the questions are answerable, neither too difficult nor
too easy, and that they conform to the course outline provided at the beginning of the programme.
Also have them make any necessary grammatical corrections and edit any ambiguous, inappropriate or
misleading content. The reviewer should also assess the appropriateness of the time limit and the
difficulty level of the questions. You could also have them take the test under testing conditions. You
should only include in the final version of the paper-and-pencil test those questions that your
reviewers endorsed.
Example 1:
Below is an example of a multiple-choice question and scoring guide (with accompanying rationale)
designed to assess the ability to solve numerical problems.

Question:
1. You are responsible for purchasing equipment and furniture. How many filing cabinets can you
purchase with a budget of N6,000, if 50 filing cabinets cost N10,000? Choose the correct answer from
among the four choices provided. No marks will be deducted for guessing. (Please note that the use of
calculators is not permitted.)

1. 15
2. 20
3. 25
4. 30

Scoring guide and rationale:

Answer: 4 (30)
N10,000/50 = N6,000/X
10,000X = N300,000
X = N300,000/10,000
X = 30
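The proportion in the rationale can be checked with a couple of lines of code; this is just the same arithmetic restated, with variable names chosen for illustration.

```python
# Checking the filing-cabinet proportion from Example 1.
cost_for_50 = 10_000            # N10,000 buys 50 cabinets
unit_cost = cost_for_50 / 50    # N200 per cabinet
budget = 6_000
cabinets = budget / unit_cost
print(cabinets)  # 30.0
```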
Example 2:
Below is an example of a short answer question and scoring guide designed to assess the knowledge
of Psychological tests.

Question:
1. What are the 4 major characteristics of a Psychological test? (4 marks)
Reliability
Validity
Norms
Relative Power

Scoring guide: 1 point each
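One way the short-answer scoring guide in Example 2 could be encoded is as a list of required points awarded one mark each. The case-insensitive substring matching below is an illustrative assumption; a real scorer would still judge variations of the answer, as the guide instructions note.

```python
# Sketch of a short-answer scoring guide: one mark per required point,
# matched case-insensitively. The required points come from Example 2;
# the matching rule itself is an illustrative assumption.

REQUIRED_POINTS = ["reliability", "validity", "norms", "relative power"]

def score_short_answer(answer_text, required_points=REQUIRED_POINTS):
    text = answer_text.lower()
    return sum(1 for point in required_points if point in text)

print(score_short_answer("Validity, reliability and norms."))  # 3 of 4 marks
```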

Advantages of Paper-and-Pencil Tests

1. A large number of different topic areas can be covered within the same test.
2. The questions are easy to score.
3. Paper and pencil are easily portable and can be used in any setting.
4. It is manual in nature, i.e. you don't need electricity or batteries to use paper and pencil.
5. You don't need any computer skills to write a paper-pencil test; the only skills needed are reading
and writing.
6. It is very flexible and cheap to set up.

Disadvantages of Paper-and-Pencil Tests

1. It is time-consuming to write good questions.


2. It takes time to evaluate.
3. There is a high chance of malpractice in this type of test.
4. There is the possibility of human error in the grading system.

Computer Based Test

A Computer-Based Test, also known as Computer-Based Assessment, e-exam, computerized testing and
computer-administered testing, is a method of administering tests in which the responses are
electronically recorded, assessed, or both. As the name implies, Computer-Based Assessment makes use
of a computer or an equivalent electronic device (e.g. a handheld computer). Computer-Based Assessment
enables educators and trainers to author, schedule, deliver, and report on surveys, quizzes, tests and
exams. Since the early 2000s, much has occurred in CBT. CBT seems to have advantages over paper and
pencil testing, both for states that run the assessment programs and for the students who participate in
them. There is currently strong interest in CBT, and advocates have identified many merits of this
approach to assessment, including: efficient administration, student preference, self-selection options for
students, improved writing performance, built-in accommodations, immediate results, efficient item
development, increased authenticity, and the potential to shift focus from assessment to instruction (e.g.,
Becker, 2006; Salend, 2009; Thompson et al., 2002). CBT also allows new ways of assessing students
that move beyond the traditional multiple choice and constructed response items. For example, innovative
assessments are now being developed that enable students to manipulate data and role play. Yet, as states
move forward with CBT, they are discovering that it is important to consider not only the benefits but
also potential unintended consequences. These include, for example, the possibility
that additional training will be needed for students with disabilities to interact successfully with
computers and the challenges of determining the best way to present some accommodations such as
screen readers.

Despite the fairly dramatic increase in attention to CBT, accessibility challenges continue to have the
potential to reduce the validity of the assessment results and to exclude some groups of students from
assessment participation. In the early years of CBT many fairly basic design issues baffled testing
companies and states as they sought to transfer paper and pencil tests onto a computer-based platform
(Thompson, Quenemoen, & Thurlow, 2006).

Contextual Issues Related to Computer-based Testing

The implementation of CBT occurs within a context that both supports and limits its use. Here we briefly
address two of the contextual factors that surround CBT: (a) the technological capacity of schools to
support CBT, and (b) universal design applied to CBT.

Technological Capacity in Schools

Access to computers and Internet capabilities have for some time been a stumbling block in the push
toward widespread use of computer-based and online assessments. For example, Becker (2006) questioned
digital equity in computer access, computer use, and state-level technology policies. Students in rural
areas and in certain urban schools are likely not to have access to computers, and even when they do,
most students aren't properly trained to use the devices. In most schools, the available computers are
used only for instruction, and often a single computer is used to teach a class of over 30 students.

Universal Design Applied to CBT

The term universal design applied to assessments in general has been defined in several ways (CAST,
2009; Thompson, Thurlow, & Malouf, 2004). Universal design of assessment generally means an
approach that involves developing assessments for the widest range of students from the beginning while
maintaining the validity of results from the assessment. Universal design also sometimes refers to
multiple means of representation, action/expression, and engagement.

Dolan et al. (2009) prepared a set of guidelines specifically for computer-based assessments. The
principles address test delivery considerations, item content and delivery considerations, and component
content and delivery considerations. A variety of topics relevant to computer-based testing and universal
design is addressed in the component content and delivery considerations section of the guidelines (e.g.,
text, images, audio, video), with each organized according to categories of processing that students apply
during testing.
As the application of universal design principles to CBT has been considered, there also has been
increased attention to various assistive technology requirements. Assistive technology devices can include
such things as speech recognition or text to speech software, as well as sophisticated technology such as
refreshable braille displays or sip and puff technology (which allows individuals unable to use a mouse or
speech-to-text technology to send signals to a computer via a straw device using air pressure by sipping
and puffing). One of the challenges of CBT has been to ensure that the assistive technology that is needed
by students with disabilities is available and that the students know how to use it. Russell (2009) has
considered this challenge and developed a set of 15 capabilities that should be incorporated into the
computer or online platform (e.g., allowing all text appearing within each test item and on all interactive
areas of the screen, including menus and navigation buttons, to be read aloud using a human voice and
synthesized voices, etc.).

Advantages of Computer-Based Tests for Students

1. It gives you instant feedback, unlike paper examinations in a traditional classroom setting.
2. It gives you the option of taking practice tests whenever you want; some assessments are
Internet-based, which allows students to take the test at home or anywhere else.

Advantages of Computer-Based Tests for Teachers, School Administrators and Examination Bodies

1. Teachers can distribute multiple versions of exams and assignments without having to manually
track which students got which tests.
2. It allows the teacher to quickly evaluate the performance of the group.
3. It takes up less time and effort.
4. All data can be stored on a single server.
5. Teachers can mix and match the question styles on exams, including graphics and make them
more interactive than paper exams.
6. Eliminates human error in grading.
7. This test is less prone to exam malpractice.
8. Purchasing or printing of answer sheets is reduced.
9. Printing of test booklets, in some cases alternate forms of the same test is reduced or completely
stopped.
10. Cost of Packaging and shipping of booklets and answer sheets to testing locations is saved.
11. Cost of Packaging and shipping for return of answer sheets and booklets from testing locations is
saved.
12. Cost of Warehousing of booklets and answer sheets, both before and after test administration is
saved.
13. Cost of Destruction of booklets and answer sheets after they have served their purpose is also
saved.
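Advantage 1 above, distributing multiple versions of the same exam, is straightforward to automate. The sketch below shuffles question order per version from a fixed question bank; the question texts are hypothetical, and a real system would also track which student received which version.

```python
# Sketch of generating multiple versions of the same exam by shuffling
# question order per version. Question texts are hypothetical examples.
import random

QUESTIONS = [
    "Define reliability.",
    "Define validity.",
    "What are norms?",
    "State one advantage of CBT.",
]

def make_versions(questions, n_versions, seed=0):
    rng = random.Random(seed)      # fixed seed -> reproducible versions
    versions = []
    for _ in range(n_versions):
        order = questions[:]       # copy so the bank itself is untouched
        rng.shuffle(order)
        versions.append(order)
    return versions

for i, version in enumerate(make_versions(QUESTIONS, 2), start=1):
    print(f"Version {i}: {version}")
```

Every version contains the same items, only in a different order, so all students face the same content while copying from a neighbour becomes harder.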

Disadvantages of Computer-Based Tests for Students

1. Not all students can use the computer very well.


2. It doesn't give teachers the option to see your line of thinking in arriving at your answer.

Disadvantages of Computer-Based Tests for Teachers

1. Technology isn't always reliable. Information can be lost if a system breaks down.
2. In some cases, teachers need some technical expertise to create exams.
3. Setting up an electronic assessment system in a learning institution or business training
environment is very expensive.
4. Testing online is not suitable for essay writing and analysis or cognitive thinking testing.

Comparison between Computer Based Test (CBT) and Paper Pencil Test (PPT)
1. Validity: both the Computer Based Test (CBT) and the Paper Pencil Test (PPT) can be classified as
valid if they measure what they set out to measure. For example, in an aptitude test for a job,
both CBT and PPT can be used to measure a test taker's potential for learning and ability to
perform in a new job or situation.
2. Reliability: both the Computer Based Test (CBT) and the Paper Pencil Test (PPT) can be classified
as reliable, i.e. the test results are consistent.
3. Topic area: both the computer-based test and the paper-pencil test focus on a specific topic area
and tasks to cover.
4. Both the Computer Based Test (CBT) and the Paper Pencil Test (PPT) can be in multiple-choice
format or in short-answer format.

Differences between Computer Based Test (CBT) and Paper Pencil Test (PPT)

1. PPT: Paper-and-pencil instruments refer to a general group of assessment tools in which candidates
read questions and respond in writing. CBT: Computer-administered testing is a method of
administering tests in which the responses are electronically recorded, assessed, or both.
2. PPT: The paper-pencil test is very easy and flexible and can be conducted in any situation.
CBT: Computer-based tests are very expensive to set up and cannot be used in every situation.
3. PPT: Requires only reading and writing skills to prepare. CBT: Aside from reading and writing
skills, the teacher and students need computer skills to use a computer-based test.
4. PPT: It is prone to human grading error. CBT: It eliminates human grading errors.
5. PPT: It takes time to evaluate the test. CBT: It saves time in evaluating test scores.
6. PPT: It takes time before students know the outcome of the test. CBT: It gives instant feedback
immediately after the test.
7. PPT: It is highly prone to exam malpractice. CBT: It is less prone to exam malpractice.
8. PPT: Very suitable for essay writing, analysis and cognitive-thinking testing. CBT: Testing online
is not suitable for essay writing, analysis or cognitive-thinking testing.

Conclusion:

In conclusion, can one really say that one type of test administration is better than the other, or which
of these assessment modes more accurately reveals the students' actual knowledge?
Bugbee (1996) recommends that the test developer must show that computer-based and paper-based
versions are equivalent, and/or must provide scaling information to allow the two to be equated.
Most instructors, and in fact even most instructional designers, have neither the skill nor the time to
craft good tests, yet additional time and effort must be invested by instructors to design high-quality
test items. With the likely proliferation of web-based courses, there will likely be an increase in
computer-based testing. Findings indicate that computer-based and paper-based tests, even with
identical items, will not necessarily produce equivalent measures of student learning outcomes.
Instructors and institutions should spend the time, cost, and effort to mitigate test-mode effects.
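One common form of the "scaling information" Bugbee calls for is linear (mean-sigma) equating, which maps a score from one form onto the scale of the other by matching means and standard deviations. The sketch below illustrates the idea with hypothetical score lists; operational equating uses far larger samples and more sophisticated designs.

```python
# Sketch of linear (mean-sigma) equating: placing a score from the
# computer-based form on the scale of the paper-based form.
# All score lists here are hypothetical illustration data.
from statistics import mean, stdev

def linear_equate(score, from_scores, to_scores):
    """Map `score` from one score distribution onto another by
    matching means and standard deviations."""
    z = (score - mean(from_scores)) / stdev(from_scores)
    return mean(to_scores) + z * stdev(to_scores)

cbt_scores = [60, 65, 70, 75, 80]   # scores on the computer-based form
ppt_scores = [55, 61, 67, 73, 79]   # scores on the paper-based form

# 70 sits at the CBT mean, so it maps onto the PPT mean (67.0).
print(round(linear_equate(70, cbt_scores, ppt_scores), 1))
```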
References:

Bugbee, A. C., Jr. (1996). The equivalence of paper-and-pencil and computer-based testing. Journal of
Research on Computing in Education, 28(3), 282-299.

Bunderson, C. V., Inouye, D. K., & Olsen, J. B. (1989). The four generations of computerized educational
measurement. In R. L. Linn (Ed.), Educational Measurement (pp. 367-407). Washington, DC:
American Council on Education.

Clariana, R. B., & Moller, L. (2000). Distance learning profile instrument: Predicting on-line course
achievement. Presented at the Annual Convention of the Association for Educational
Communications and Technology, Denver, CO. [http://www.personal.psu.edu/rbc4/dlp_aect.htm]

Clark, R. E. (1994). Media will never influence learning. Educational Technology, Research and
Development, 42(2), 21-29.

Federico, P. A. (1989). Computer-based and paper-based measurement of recognition performance. Navy
Personnel Research and Development Center Report NPRDC-TR-89-7. (Available from ERIC:
ED 306 308)

Mazzeo, J., Druesne, B., Raffeld, P. C., Checketts, K. T., & Muhlstein, A. (1991). Comparability of
computer and paper-and-pencil scores for two CLEP general examinations. College Board Report
No. 91-5. (Available from ERIC: ED 344 902)

Mead, A. D., & Drasgow, F. (1993). Equivalence of computerized and paper-and-pencil cognitive ability
tests: A meta-analysis. Psychological Bulletin, 114, 449-458.

Parshall, C. G., & Kromrey, J. D. (1993). Computer testing versus paper-and-pencil: An analysis of
examinee characteristics associated with mode effect. Paper presented at the Annual Meeting of
the American Educational Research Association, Atlanta, GA, April. (Available from ERIC:
ED 363 272)

Wallace, P. E., & Clariana, R. B. (2000). Achievement predictors for a computer-applications module
delivered via the world-wide web. Journal of Information Systems Education, 11(1), 13-18.
[http://gise.org/JISE/Vol11/v11n1-2p13-18.pdf]
