
Internet and Higher Education 12 (2009) 7–13

Development of an instrument to measure perceived cognitive, affective, and psychomotor learning in traditional and virtual classroom higher education settings
Alfred P. Rovai, Mervyn J. Wighting, Jason D. Baker, and Linda D. Grooms
School of Education, Regent University, 1000 Regent University Drive, Virginia Beach, Virginia 23464-9800, United States

Abstract
The purpose of this study was to develop and validate a self-report instrument that can be used to measure learning in the cognitive, affective, and psychomotor domains. The study underwent three phases, each with its own data collection and analysis. Phase I featured the development, testing, and factor analysis of an 80-item instrument addressing cognitive, affective, and psychomotor learning that was administered to a sample of 142 online and face-to-face learners. Based on the results, the instrument was reduced to 21 items for Phase II and tested with a new sample of 171 online and face-to-face students. The results of confirmatory factor analysis suggested a better data fit with an even smaller 9-item instrument, which was then administered to a new sample of 221 online and face-to-face students in Phase III. The results of this final phase are presented along with the resulting CAP Perceived Learning Scale, a 9-item self-report measure of perceived cognitive, affective, and psychomotor learning. Implications and usage of the CAP Perceived Learning Scale for research and practice are also discussed.

© 2008 Elsevier Inc. All rights reserved.

Article history: Accepted 7 October 2008

Keywords: Perceived learning; Cognitive learning; Affective learning; Psychomotor learning; Distance education; Higher education

1. Introduction

The emergence of the Internet as a mainstream communication medium has resulted in the development of new educational opportunities, such as instruction delivered via asynchronous learning networks, synchronous online seminars, blogs, wikis, podcasts, and 3D virtual worlds. Each new instructional technology provokes the question of whether teaching delivered through it is as effective as traditional face-to-face instruction, along with questions about the relative effectiveness of the various mixes of online technology tools used in designing online courses. This question seems to be asked every time an educational approach differs from the norm, particularly if the new method involves cutting-edge technologies or mediated communication. The emergence of new Web 2.0 tools, such as blogs and wikis, in online learning underscores the need to evaluate student learning outcomes in order to determine their impact on learning. Numerous studies demonstrate that alternative educational experiences, such as online learning, produce outcomes commensurate with face-to-face instruction, provided the methods and technologies are appropriate for the instructional objectives.


Russell (1999) compiled the results of over 350 research reports, summaries, and papers from the 1920s to the present and found that distance education (whether correspondence, videoconferencing, or online) is just as effective as traditional instruction. However, a meta-analysis of 232 comparative studies conducted by Bernard et al. (2004) concludes that while there is no average difference in achievement between distance and classroom courses, the results demonstrate wide variability. They note that "a substantial number of [distance education] applications provide better achievement results, are viewed more positively, and have higher retention rates than their classroom counterparts. On the other hand, a substantial number of [distance education] applications are far worse than classroom instruction" (p. 406). In other words, media usage or delivery method is not the sole determinant of educational effectiveness; additional variables, such as course design, pedagogical techniques, student characteristics, and technology tools, affect the educational experience. Therefore, given the many factors that influence learning and the wide variety of subject areas and instructional models, it would be helpful to both researchers and practitioners to be able to compare educational effectiveness across a broad spectrum of instructional experiences and content areas. Such a cross-categorical approach to measuring educational effectiveness would prove helpful not only when comparing online, blended, and face-to-face instruction but also when comparing different educational tools, techniques, and models to see which instructional designs work better for varied content and student populations. The challenge, however, is how to measure learning independent of the course content, instructor, institution, academic level, and other limiting factors.


Dumont (1996) and Hiltz and Wellman (1997) report that student grades are the most prevalent measure of cognitive learning outcomes. However, using grades to operationalize learning may not always provide the best results. Classroom test grades or final course grades, particularly in graduate university courses, tend to have very restricted ranges (i.e., they tend to reflect uniformly superior achievement), limiting their use in any correlational study: whenever a variable's range is restricted, any correlation involving that variable will be artificially reduced. Additionally, grades can have little relationship to what students actually learned. For example, students may already know the material when they enroll, or their grade may be more related to class participation, timely assignment submission, or attendance than to learning. Furthermore, grades may not be a reliable measure of learning, particularly for performance tests, as different teachers, and even the same teacher at different times, will likely not assign grades in a consistent manner. Therefore, using grades as the sole measure of learning can be problematic, particularly when measuring learning outcomes across disparate courses and content areas, so a self-report instrument has advantages for such research. A comprehensive instrument that measures all three domains of learning would be beneficial for researchers and practitioners, since learning can involve cognitive, affective, and psychomotor components (Bloom, 1956). Accordingly, the purpose of the present research was to develop and validate an instrument that could be used to measure perceived learning.
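To make the range-restriction point concrete, the short simulation below uses illustrative synthetic data (not data from the present study) to show how truncating a grade distribution attenuates its correlation with an underlying trait:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data: a latent "learning" trait and a noisy grade measure.
    learning = rng.normal(0, 1, 10_000)
    grades = 0.7 * learning + 0.3 * rng.normal(0, 1, 10_000)

    # Correlation over the full range of grades.
    r_full = np.corrcoef(learning, grades)[0, 1]

    # Restrict the sample to the top fifth of grades, mimicking graduate
    # courses in which nearly all grades reflect uniformly superior work.
    top = grades > np.quantile(grades, 0.80)
    r_restricted = np.corrcoef(learning[top], grades[top])[0, 1]

    # The restricted-range correlation is markedly smaller than the
    # full-range correlation, which is the attenuation described above.
    print(f"full range r = {r_full:.2f}; restricted r = {r_restricted:.2f}")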
2. Background

Education follows a general sequence that starts with establishing educational goals and instructional objectives, continues with determining and teaching the curriculum, and ends with summative assessment of student learning. Teaching takes place within the context of educational philosophies based on theories of how people learn. These epistemological considerations influence course design and pedagogy, particularly the strategies teachers use to facilitate learning. While these theories and the resulting teaching techniques vary, the idea of learning as acquisition and as participation has underpinned much educational thought (Sfard, 1998). Acquisition deals with the products of learning, such as knowledge, skills, attitudes, values, behavior, and understanding, while participation suggests active involvement in the learning process. Several taxonomies addressing the products of learning have been developed, with perhaps the most well-established ones addressing three overlapping domains: cognitive, affective, and psychomotor learning. These domains can be measured effectively using self-report instrumentation (Corrallo, 1994).

2.1. Cognitive learning

Bloom (1956) defined cognitive learning as dealing with "recall or recognition of knowledge and the development of intellectual abilities and skills" (p. 7). The six products associated with the cognitive domain (Anderson & Krathwohl, 2001; Bloom & Krathwohl, 1956) are (a) knowledge, or the ability to recognize or recall information; (b) comprehension, or the ability to demonstrate understanding by describing, paraphrasing, etc.; (c) application of learned information to solve a problem or answer a question; (d) analysis, or breaking down a problem into its constituent parts; (e) evaluation, or judging the worth of an idea using explicit criteria; and (f) creation, or reorganizing knowledge into a new pattern.

2.2. Affective learning

Kearney (1994) defined affective learning as "an increasing internalization of positive attitudes toward the content or subject matter" (p. 81). The products of learning dealing with the affective domain (Anderson & Krathwohl, 2001; Krathwohl, Bloom, & Masia, 1964) address interests, opinions, emotions, attitudes, and values. They focus on the development of attitudes and behavior rather than on the intellectual abilities upon which the cognitive domain is based. The five products of learning associated with the affective domain are (a) receiving, or paying attention to some stimulus; (b) responding, or reacting to a stimulus in some way; (c) valuing particular ideas; (d) organizing different values, comparing them, resolving conflicts, and beginning to develop a personal value system; and (e) commitment to a coherent, internally consistent value system. Rodriguez, Plax, and Kearney (1996) suggest affective learning subsumes student motivation and promotes greater student learning because "affective learning motivates students to engage in task-relevant behaviors" (p. 297).

2.3. Psychomotor learning

The psychomotor domain addresses the fact that neither conscious knowledge nor values and attitudes are sufficient to explain effective performance of learned tasks. Learning in the psychomotor domain is associated with physical skills such as speed, dexterity, grace, use of instruments, expressive movement, and use of the body in dance or athletics (e.g., Anderson & Krathwohl, 2001; Simpson, 1974). The psychomotor domain addresses the development of skills relating to manual tasks and physical movement as well as the operation of equipment, such as a computer, and performances in science, art, and music. The six products of learning associated with the psychomotor domain (Simpson, 1974) are (a) perception, such as detecting cues to act; (b) guided response, such as being able to perform a specific act under the guidance of a teacher; (c) mechanism, or the ability to perform a learned task without supervision; (d) complex overt response, or the ability to perform a complex pattern of acts; (e) adaptation, or the ability to alter an act to respond to a new situation; and (f) origination, or the ability to develop new acts.

2.4. Self-reports of learning

In their study of instructor immediacy and cognitive learning, Richmond, Gorham, and McCroskey (1987) broke with previous research and developed a self-report measure of learning rather than using course grades or test scores. They explained that while tests were available for various subjects, their subject-specific nature made it difficult to conduct research across disciplines, and grades were also problematic since grading standards vary by instructor. They argued that since college students are adults with significant educational experience, "they are in a position to estimate accurately the amount they learn in a given class. In fact, it is likely that their estimate is at least as good as subjective grades provided by teachers in many classes or by tests administered in classes not based on clear behavioral objectives" (p. 581). Research evidence suggests self-reports of learning, or perceived learning, can be a valid measure of learning. Pace (1990) supports the validity of student self-reports of learning based on research evidence that suggests the consistency of results over time and across different populations. He also found that patterns of outcomes for perceived learning vary across majors and length of study in the same manner as was established through direct achievement testing. However, the emphasis has been on measuring cognitive change.
In a summary of the literature, Corrallo (1994) notes that there is "a considerable literature concerned with establishing the validity of student self-reports about cognitive outcomes" (p. 23). He concludes that self-reports of cognitive gain (i.e., perceived cognitive learning) are indicative of results obtained through more direct forms of assessment. Use of self-reports to measure cognitive learning is not without its critics.


In particular, one can argue that students are not capable of accurately judging how much they learned. Zechmeister, Rusch, and Markell (1986), for example, report that many students overestimate how much they know, and that poorer performing students show a greater degree of inaccuracy. Bem (1972) suggests students who have a more positive attitude toward their class and instructor may participate more in the class and thus infer that they are learning more.

2.5. Self-report learning instruments

There are few self-report instruments available to measure learning. Most of these instruments measure only cognitive learning, a few measure affective learning, and the psychomotor learning instruments are limited to specific contexts. Much of the current research on perceived cognitive learning employs the two-item Learning Loss Scale developed by Richmond et al. (1987). Using a Likert scale, the first item asks students to estimate how much they learned in a particular course, and the second asks students to estimate how much they could have learned with the ideal instructor. A learning loss score is computed by subtracting the score on the first item from the score on the second item. Roach (1994) adapted this cognitive learning instrument by asking respondents to respond to the following two items on the same Likert scale: (a) "How much are you learning in this class?" and (b) "How much knowledge/understanding are you gaining in this class?" Frymier and Houser's (1999) Revised Learning Indicators Scale is also used to measure perceived cognitive learning. It consists of seven items that measure such things as the ability to explain course content to others, the degree to which course content is contemplated outside of class, and the overall perception of the amount learned in the class. The measure uses five-step Likert items anchored by 0 = never and 4 = very often, and it has an alpha reliability of .82.

Measurement of perceived affective learning is less prevalent in the research literature. An early published instrument with accompanying evidence of validity and reliability is the Affective Learning Scale, originally developed by Scott and Wheeless (1975), later revised and extended by Andersen (1979), and finally expanded and refined by McCroskey, Richmond, Plax, and Kearney (1985). This self-report instrument measures affective gain by asking students to respond to a series of affective items with a specific teacher and course in mind. It should be noted, however, that four of the 20 items on the original Scott and Wheeless (1975) instrument measure affective feelings regarding the course instructor rather than the course content.

Finding self-report psychomotor instruments is more difficult: the authors could only locate instruments that measure psychomotor learning in a specific context, such as operating machinery. No generalized psychomotor learning self-report instruments were located.

3. Methodology

3.1. Setting and participants

The two universities contributing participants to the present study are located in the metropolitan Hampton Roads region of the Commonwealth of Virginia, a largely urban region with a population in excess of 1.5 million. Study participants (N = 221) for Phase III of the study reported in this article were enrolled in either traditional face-to-face (n = 64) or fully online (n = 157) courses. A total of 154 (69.7%) were female and 67 (30.3%) were male. Most students were education majors, perhaps explaining the greater percentage of females in the study.
A total of 36 (16.3%) were African American, 8 (3.6%) were Asian/Pacific Islander, 165 (74.7%) were Caucasian, 6 (2.7%) were Hispanic, and 5 (2.3%) classified themselves as other.

A total of 8 (3.6%) participants were 18–20 years old, 91 (41.2%) were 21–30 years old, 63 (28.5%) were 31–40 years old, 44 (19.9%) were 41–50 years old, and 15 (6.8%) were over 50.

3.2. Instrumentation

The Learning Loss Scale (Richmond et al., 1987) and the Affective Learning Scale (Andersen, 1979; McCroskey et al., 1985; Scott & Wheeless, 1975) were used in the present study to evaluate the concurrent validity of the CAP Perceived Learning Scale.

Cognitive learning was measured using the Learning Loss Scale (Richmond et al., 1987), which required students to respond to two items. The first item was "On a scale of zero to nine, how much did you learn in this class, with zero meaning you learned nothing and nine meaning you learned more than in any other class you've had?" The second item was "How much do you think you could have learned in this class had you had the ideal instructor?" A learning loss score was created by subtracting the score on the first item from the score on the second item. This instrument has been widely used in instructional communication research to measure cognitive learning across classes in different subject areas, since creating a standardized achievement instrument for courses across multiple disciplines is not feasible (Chesebro & McCroskey, 2000; McCroskey & Richmond, 1992). Studies using this instrument have obtained test–retest reliability estimates from .85 to .88 (Chesebro & McCroskey, 2000; McCroskey, Sallinen, Fayer, Richmond, & Barraclough, 1996; Richmond et al., 1987). Chesebro and McCroskey (2000) report evidence of the concurrent validity of this instrument with cognitive recall of learned information.

Affective learning was measured using the Affective Learning Scale (Andersen, 1979; McCroskey et al., 1985; Scott & Wheeless, 1975). This self-report instrument measures affective gain by asking students to respond to a series of items with a specific teacher and course in mind. The instrument assesses students' attitudes toward (a) the course content, (b) the course instructor, and (c) the behaviors recommended in the course. Affective learning is measured in each of these three areas through the use of four 7-step semantic differential scales anchored by good–bad, worthless–valuable, fair–unfair, and positive–negative. Higher scores represent more affective learning in the course. Alpha reliability estimates have ranged from a low of .86 to a high of .98 (Gorham, 1988; Kearney & McCroskey, 1980; Kearney, Plax, & Wendt-Wasco, 1985; Plax, Kearney, McCroskey, & Richmond, 1986; Richmond, 1990). Kearney (1994) reports evidence of the validity of the Affective Learning Scale.
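As a concrete sketch, the Learning Loss computation just described reduces to a single subtraction; the function and parameter names below are illustrative, not part of the instrument:

    def learning_loss(learned: int, ideal_learned: int) -> int:
        """Learning Loss Scale score (Richmond et al., 1987).

        Both items are answered on a 0-9 scale; the loss score is the
        ideal-instructor estimate minus the actual-learning estimate.
        """
        if not (0 <= learned <= 9 and 0 <= ideal_learned <= 9):
            raise ValueError("item responses must be between 0 and 9")
        return ideal_learned - learned

    # A student who reports learning 7 but believes an ideal instructor
    # would have produced 9 receives a learning loss score of 2.
    print(learning_loss(learned=7, ideal_learned=9))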
3.3. Procedures

The present study was divided into three phases, each with its own data collection and analysis. For Phase I, 80 items were developed for the CAP Perceived Learning Scale (25 to 28 items per domain) that addressed all levels of learning in the cognitive, affective, and psychomotor domains. Exploratory factor analysis (N = 142) supported a three-factor solution; however, substantial cross-loadings were noted on several items. As a result, in Phase II the scale was reduced to 21 items (seven items per domain) and administered to a new sample (N = 171). Confirmatory factor analysis supported a three-factor solution, but cross-loadings suggested a better data fit might be attainable by deleting several items in order to optimize loadings. Consequently, the scale was reduced to nine items (three items per domain) and administered to a new sample (N = 221). The Results section below presents the statistical results of this final testing phase.

3.4. Design and analysis

The present study used a quantitative methodology to establish the extent of the validity and reliability of the CAP Perceived Learning Scale among higher education students in traditional and online learning environments. Confirmatory maximum likelihood factor analysis of the data was conducted to examine construct validity and to determine the dimensionality of the instrument. Direct oblimin rotation, an oblique rotational method that allows some correlation between factors, was used in order to achieve a more interpretable simple structure, given that the literature suggests the cognitive, affective, and psychomotor domains overlap. Reliability analysis was conducted using Cronbach's coefficient alpha in order to establish the internal consistency characteristics of the instrument. The specific procedures used for each analysis are described in greater detail in the Results section below.
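The paper does not name its analysis software, but the procedure described in this section can be sketched with the factor_analyzer Python package (assumed tooling), given a pandas DataFrame with one column per item and one row per respondent:

    import pandas as pd
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import (
        calculate_bartlett_sphericity,
        calculate_kmo,
    )

    def cap_factor_analysis(items: pd.DataFrame) -> FactorAnalyzer:
        """Three-factor maximum-likelihood factor analysis with direct
        oblimin rotation, mirroring the procedure reported above."""
        # Preconditions for factoring: Bartlett's test of sphericity and
        # the Kaiser-Meyer-Olkin measure of sampling adequacy.
        chi_square, p_value = calculate_bartlett_sphericity(items)
        _, kmo_total = calculate_kmo(items)
        print(f"Bartlett chi-square = {chi_square:.2f}, p = {p_value:.4f}")
        print(f"KMO = {kmo_total:.2f}")

        fa = FactorAnalyzer(n_factors=3, method="ml", rotation="oblimin")
        fa.fit(items)

        # Kaiser-Guttman rule: retain factors with eigenvalues of 1.00 or
        # greater (checked alongside the scree plot in the paper).
        eigenvalues, _ = fa.get_eigenvalues()
        print("Eigenvalues:", eigenvalues.round(2))
        print("Loadings:\n", fa.loadings_.round(2))
        print("Communalities:", fa.get_communalities().round(2))
        return fa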

Table 1
Descriptive statistics (N = 221)

    Scale                         Min      Max        M       SD
    CAP Scale-Total             11.00    54.00    37.57     9.23
    CAP Scale-Cognitive          2.00    18.00    13.06     3.38
    CAP Scale-Affective          1.00    18.00    12.63     3.85
    CAP Scale-Psychomotor         .00    18.00    11.87     4.62
    Affective Learning Scale    50.00   140.00   118.23    20.42
    Learning Gain Scale           .00     9.00      .88     1.43
    Learning Loss Scale           .00     9.00      .88     1.43

Note: CAP Scale = Perceived Cognitive, Affective, and Psychomotor Learning Scale.

Table 2
Correlation matrix (N = 221)

    Measure                          1      2      3      4      5      6
    1. CAP Scale-Total
    2. CAP Scale-Cognitive         .69
    3. CAP Scale-Affective         .82    .44
    4. CAP Scale-Psychomotor       .80    .28    .48
    5. Affective Learning Scale    .57    .52    .53    .32
    6. Learning Gain Scale         .63    .50    .62    .36    .59
    7. Learning Loss Scale        -.38   -.38   -.39   -.16   -.39   -.67

Note: p < .05. CAP Scale = Perceived Cognitive, Affective, and Psychomotor Learning Scale.

Table 4
Inter-correlations of CAP Perceived Learning Scale items (N = 221)

    Item (domain)          1      2      3      4      5      6      7      8
    1. (Cognitive)
    2. (Cognitive)       .44
    3. (Psychomotor)     .31     ns
    4. (Affective)       .24     ns    .31
    5. (Cognitive)       .46    .21    .19    .20
    6. (Affective)       .49    .16    .46    .46    .35
    7. (Psychomotor)     .21    .17    .41    .16     ns    .35
    8. (Psychomotor)     .34     ns    .67     ns    .22    .45    .43
    9. (Affective)       .57    .23    .43    .37    .35    .67    .27    .38

Note: p < .05. ns = not significant.

4. Results

Table 1 displays the descriptive statistics for the CAP Perceived Learning Scale, the Affective Learning Scale, the one-item Perceived Learning Gain Scale, and the two-item Perceived Learning Loss Scale. These statistics pertain to the final data collection effort, which was used to validate the 9-item CAP Perceived Learning Scale. Additionally, an internal consistency estimate of reliability was calculated using Cronbach's coefficient alpha; the reliability of the CAP Perceived Learning Scale-Total was .79. CAP Perceived Learning Scale-Total scores can range from a low of 0 to a high of 54, and CAP subscale scores can range from a low of 0 to a high of 18. Affective Learning Scale scores can range from a low of 20 to a high of 140, and the Learning Gain and Learning Loss Scales can each range from a low of 0 to a high of 9.
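The internal consistency estimate reported above can be computed directly from the item responses. A minimal sketch, assuming the scored responses (reverse-scored items already transformed) sit in a NumPy array with one row per respondent and one column per item:

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's coefficient alpha for an (n_respondents, n_items) array."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Usage with placeholder data standing in for the study's 221 responses:
    rng = np.random.default_rng(1)
    responses = rng.integers(0, 7, size=(221, 9)).astype(float)
    print(round(cronbach_alpha(responses), 2))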
Table 3
Descriptive statistics for CAP Perceived Learning Scale items (N = 221)

    Item                                                                M      SD
    1. I can organize course material into a logical structure.      4.79    1.20
    2. I cannot produce a course study guide for future students.    3.90    1.84
    3. I am able to use physical skills learned in this course
       outside of class.                                             4.31    1.78
    4. I have changed my attitudes about the course subject matter
       as a result of this course.                                   3.93    1.75
    5. I can intelligently critique the texts used in this course.   4.38    1.40
    6. I feel more self-reliant as the result of the content
       learned in this course.                                       4.25    1.58
    7. I have not expanded my physical skills as a result of this
       course.                                                       3.90    2.01
    8. I can demonstrate to others the physical skills learned in
       this course.                                                  3.72    1.80
    9. I feel that I am a more sophisticated thinker as a result of
       this course.                                                  4.45    1.39

Note: Items 1, 2, and 5 are cognitive items; items 4, 6, and 9 are affective items; items 3, 7, and 8 are psychomotor items. Negatively worded items were reverse scored. Scores can range from a low of 0 to a high of 6 for each item.

Table 2 presents the correlations among the scales identified in Table 1. As expected, there were statistically significant positive relationships between the CAP Perceived Learning Scale, the Affective Learning Scale, and the one-item Perceived Learning Gain Scale. The relationship between the CAP Perceived Learning Scale and the Perceived Learning Loss Scale was negative. Descriptive statistics for the CAP Perceived Learning Scale items are displayed in Table 3. Table 4 is an inter-correlation matrix of scale items; it reveals that most test items are correlated with each other.

Scores on all nine CAP Perceived Learning Scale items were analyzed using maximum-likelihood confirmatory factor analysis with direct oblimin rotation, since the subscales were related. The Kaiser–Meyer–Olkin measure of sampling adequacy was .78, suggesting the items share sufficient common variance for factor analysis. Additionally, Bartlett's test of sphericity yielded an approximate chi-square of 663.23 with 36 degrees of freedom, p < .001, providing evidence that the correlation matrix is not an identity matrix and that the data are therefore appropriate for factor analysis. Three criteria were used to determine the number of factors to extract: the scree plot, the Kaiser–Guttman rule, and psychological meaningfulness. In the present study, the scree plot and the Kaiser–Guttman rule indicated that the hypothesis of unidimensionality was not supported, since three factors possessed eigenvalues of 1.00 or greater. An examination of the structure and pattern coefficients suggested the three-factor solution possesses good simple structure and can be meaningfully interpreted as perceived cognitive learning, perceived affective learning, and perceived psychomotor learning, even though the oblique rotation permits cross-loadings that make individual item loadings less clear-cut. Table 5 displays the CAP Perceived Learning Scale items, factor loadings, and communalities (h2). The factor loadings express the correlation of each item with the factor based on direct oblimin rotation, and the communality estimates reflect the percent of variance in a given item explained by the three-factor solution.

Table 5
CAP Perceived Learning Scale items, factor loadings, and communalities (N = 221)

    Item                                                      F1     F2     F3     h2
    1. I can organize course material into a logical
       structure. (Cognitive)                                .92    .46    .38    .50
    2. I cannot produce a course study guide for future
       students. (Cognitive)                                 .48    .13    .13    .23
    3. I am able to use physical skills learned in this
       course outside of class. (Psychomotor)                .28    .44    .72    .54
    4. I have changed my attitudes about the course
       subject matter as a result of this course.
       (Affective)                                           .22    .57    .23    .28
    5. I can intelligently critique the texts used in
       this course. (Cognitive)                              .49    .34    .26    .24
    6. I feel more self-reliant as the result of the
       content learned in this course. (Affective)           .47    .82    .58    .57
    7. I have not expanded my physical skills as a
       result of this course. (Psychomotor)                  .20    .31    .46    .25
    8. I can demonstrate to others the physical skills
       learned in this course. (Psychomotor)                 .34    .27    .97    .53
    9. I feel that I am a more sophisticated thinker as
       a result of this course. (Affective)                  .57    .73    .50    .55

Note: Extracted factors: F1 = Cognitive; F2 = Affective; F3 = Psychomotor.

Overall, the three maximum-likelihood factors accounted for 66.78% of the variance in the data.

5. Discussion

This study presents a self-report instrument that can be used to measure perceived learning within the cognitive, affective, and psychomotor domains in online and face-to-face educational environments. The 9-item CAP Perceived Learning Scale was developed through three phases over progressively larger samples of face-to-face and online learners across two universities. The instrument generates an overall CAP Perceived Learning Scale score representing perceived learning across all three of Bloom's (1956) domains as well as three subscales: cognitive learning, affective learning, and psychomotor learning. A factor analysis confirmed these three subscales as latent dimensions and supported the 9-item instrument over the previously tested 80- and 21-item versions. As seen in Table 5, most factor loadings were high (i.e., > .7) and the remaining loadings were moderately high (i.e., > .45). Although some cross-loadings are evident, due in part to the use of an oblique rotation, the three-factor solution accounted for a substantial amount (i.e., 67%) of the variance in the data and provides a valid measure of the three learning domains.

One of the immediate benefits of the CAP Perceived Learning Scale is its potential use within online learning research. Given the continued debates over the quality of various teaching modalities, especially within online and blended courses, it is critical to enable instructors and researchers to study educational effectiveness across courses, instructors, institutions, and formats. Tallent-Runnels et al. (2006) reviewed research into online teaching and concluded that the field needs more systematic studies specifically designed to measure the learning effectiveness of online instructional practices, rather than studies simply describing various dynamics observed in the online classroom. They further argue:

    While recent research literature defines online delivery systems, few studies actually focus on instruction and learning online. Many studies point to student preferences, faculty satisfaction, and student motivation as primary delivery system determinants. To assess delivery system models, new research is needed that measures impact on academic success and thinking skills. Additional research might focus on plausible learner outcomes related to delivery system variables to test learning theories and models of teaching in the design of online courses. (p. 117)

The measure of perceived learning offered by the CAP Perceived Learning Scale can be used as part of a systematic research agenda studying the effectiveness of different online learning theories, techniques, and models. The CAP Perceived Learning Scale also answers the concern raised by Tallent-Runnels et al. (2006) that many online learning studies use single-item measures of key variables, by providing researchers with a valid and reliable multi-item psychometric instrument that can be used for empirical research. By design, the CAP Perceived Learning Scale can be used across various disciplines, thus enabling learning comparisons without the limitations associated with course-specific assessments or final grades.
When combined with existing classroom environment and psychosocial instruments, such as the Constructivist On-Line Learning Environment Survey (Taylor & Maor, 2000), the Distance Education Learning Environments Survey (Walker, 2003), or the Community of Inquiry framework (Garrison & Arbaugh, 2007), the CAP Perceived Learning Scale can be used to connect learning effectiveness with specific educational practices. Perhaps, for example, certain practices increase affect but have little cognitive or psychomotor benefit. By considering learning across these three domains, the CAP Perceived Learning Scale opens up new research opportunities that can then be used to improve teaching and learning practices.

One of the limitations of using a self-report instrument to measure learning is the potential conflation of factors in the student's view of the educational experience, such as cognitive and affective learning. While the skills-oriented nature of the psychomotor domain is relatively straightforward, it may be difficult for students to differentiate their cognitive learning from their affective perceptions, particularly while they are in the process of completing a course. It is not uncommon for faculty to hear students complain about the difficulty of a course at the end of a term, only to change their opinion later in the program when they realize how much content they learned and retained. This may help explain some of the cross-loadings and inter-correlations evident in a few cognitive and affective items, although the dominance of single-item and Learning Loss measures of cognitive learning (cf. Chesebro & McCroskey, 2000) reflects both a research need for, and empirical confidence in, such self-report measures. Additional research conducted with the CAP Perceived Learning Scale across a wider variety of disciplines (particularly beyond education majors) will provide additional data that can be used to further validate the scale or provide the opportunity for iterative improvement.

Since the CAP Perceived Learning Scale was developed and tested with students enrolled in both online and campus courses, it has utility across the entire delivery spectrum, from fully online and blended courses to Web-enhanced and fully face-to-face instruction. Additionally, a Flesch–Kincaid grade level score of 7.5 suggests that the CAP Perceived Learning Scale can be used with a wide variety of student populations. Given the rapid growth of blended learning in particular (Picciano & Dziuban, 2007), this resource enables researchers to tease out the specific pedagogical value of in-person instruction and teaching approaches for student learning. For example, a move from fully online to blended courses may demonstrate increased cognitive learning, or it may instead increase affective learning without significantly improving cognitive learning. As long as effectiveness research is limited to blunter instrumentation, such nuances will remain hidden, and so it is hoped that the CAP Perceived Learning Scale will serve to promote such detailed research.

Similarly, different technologies may be better suited to different applications within the educational endeavor. For all of the benefits of the Internet, for example, there are many valuable characteristics of print. While we harness some of these in online courses (since the written word is a significant component of most online courses), there may be times when books are more appropriate for instructional communication than web sites. As a counterpoint, the majority of asynchronous online courses depend heavily on text, a decision often rooted in faculty technological skill (or lack thereof) and bandwidth limitations rather than in deliberate pedagogical choice. Ultimately, the benefits of these technologies will depend on how well they promote student learning. Instructors, whether they teach face-to-face, blended, or online classes, should consider the characteristics of various technologies and select those that best suit the teaching and learning process under consideration.
The CAP Perceived Learning Scale will enable researchers to better understand the complex relationships between technology and media selection, instructional design, pedagogical practices, and student learning, and thus promote the development of more effective educational environments.

6. Conclusion

The data presented here provide evidence that the 9-item CAP Perceived Learning Scale is a valid and reliable instrument for measuring perceived cognitive, affective, and psychomotor learning in online and face-to-face higher education courses. The resulting CAP Perceived Learning Scale is a useful psychometric test that can be used in future research to understand the effectiveness of learning theories, models, and techniques in a variety of educational environments. Although the instrument was tested in courses at two universities, the sample was weighted heavily with education majors, so caution should be exercised when generalizing perceived learning scores to students in other disciplines.


Additional research conducted using the CAP Perceived Learning Scale in other subject areas and at multiple institutions could be used to confirm the reliability of the instrument across all sampled populations. The readability of the instrument suggests that it could also be applied outside of university settings, although caution should be exercised when administering the instrument to high schoolers and younger students; this self-report instrument presupposes a sufficient level of educational experience, maturity, and self-reflection to adequately judge personal learning experiences.

Appendix A. CAP Perceived Learning Scale

Directions: A number of statements that students have used to describe their learning appear below. Some statements are positively worded and others are negatively worded. Carefully read each statement and then place an X in the appropriate column to the right of each statement to indicate how much you agree with the statement, where lower numbers reflect less agreement and higher numbers reflect more agreement. There is no right or wrong response to any statement, and your course grade will not be influenced by how you respond. Do not spend too much time on any one statement, but give the response that seems to best describe the extent of your learning. It is important that you respond to all statements.
Please respond to each statement below as it specifically relates to your experience in this course. Each statement is rated on a seven-point scale anchored by "Not at all" and "Very much so."

1. I can organize course material into a logical structure.
2. I cannot produce a course study guide for future students.
3. I am able to use physical skills learned in this course outside of class.
4. I have changed my attitudes about the course subject matter as a result of this course.
5. I can intelligently critique the texts used in this course.
6. I feel more self-reliant as the result of the content learned in this course.
7. I have not expanded my physical skills as a result of this course.
8. I can demonstrate to others the physical skills learned in this course.
9. I feel that I am a more sophisticated thinker as a result of this course.

References
Andersen, J. F. (1979). Teacher immediacy as a predictor of teaching effectiveness. Communication Yearbook, 3, 543–559.
Anderson, L., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching and assessing: A revision of Bloom's taxonomy of educational objectives. New York: Longman.
Bem, D. (1972). Self-perception theory. In L. Berkowitz (Ed.), Advances in experimental social psychology, Vol. 6 (pp. 1–62). New York: Academic Press.
Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., et al. (2004). How does distance education compare to classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 379–439.
Bloom, B. S. (1956). Taxonomy of educational objectives. Handbook 1: Cognitive domain. New York: David McKay.
Bloom, B. S., & Krathwohl, D. R. (1956). Taxonomy of educational objectives: The classification of educational goals, by a committee of college and university examiners. Handbook I: Cognitive domain. New York: Longman.
Chesebro, J., & McCroskey, J. C. (2000). The relationship between students' reports of learning and their actual recall of lecture material: A validity test. Communication Education, 49, 297–301.
Corrallo, S. (1994). A preliminary study of the feasibility and utility for national policy of instructional good practice indicators in undergraduate education (Contractor report). National Center for Higher Education Management Systems, U.S. Department of Education, National Center for Education Statistics (Report No. 94-437).
Dumont, R. (1996). Teaching and learning in cyberspace. IEEE Transactions on Professional Communications, 39(4), 192–204.
Frymier, A. B., & Houser, M. (1999). The revised learning indicators scale. Communication Studies, 50, 1–12.
Garrison, D. R., & Arbaugh, J. B. (2007). Researching the community of inquiry framework: Review, issues, and future directions. The Internet and Higher Education, 10(3), 157–172.
Gorham, J. (1988). The relationship between verbal teacher immediacy behaviors and student learning. Communication Education, 37, 40–53.
Hiltz, S. R., & Wellman, B. (1997). Asynchronous learning networks as a virtual classroom. Communications of the ACM, 40(9), 44–49.
Kearney, P. (1994). Affective learning scale. In R. B. Rubin, P. Palmgreen, & H. E. Sypher (Eds.), Communication research measures: A sourcebook (pp. 81–85, 238–241). New York: The Guilford Press.
Kearney, P., & McCroskey, J. C. (1980). Relationships among teacher communication style, trait and state communication apprehension, and teacher effectiveness. In D. Nimmo (Ed.), Communication yearbook 4 (pp. 533–551). New Brunswick, NJ: Transaction Books.
Kearney, P., Plax, T. G., & Wendt-Wasco, N. G. (1985). Teacher immediacy for affective learning in divergent college classes. Communication Quarterly, 33, 61–74.
Krathwohl, D. R., Bloom, B. S., & Masia, B. B. (1964). Taxonomy of educational objectives: The classification of educational goals. Handbook II: Affective domain. New York: David McKay.
McCroskey, J. C., & Richmond, V. P. (1992). The cognitive learning problem. In V. P. Richmond & J. C. McCroskey (Eds.), Power in the classroom: Communication, control, and concern (pp. 106–108). Hillsdale, NJ: Lawrence Erlbaum Associates.
McCroskey, J. C., Richmond, V. P., Plax, T. G., & Kearney, P. (1985). Power in the classroom V: Behavior alteration techniques, communication training, and learning. Communication Education, 34, 214–226.
McCroskey, J. C., Sallinen, A., Fayer, J. M., Richmond, V. P., & Barraclough, R. A. (1996). Nonverbal immediacy and cognitive learning: A cross-cultural investigation. Communication Education, 45, 200–292.
Pace, C. R. (1990). The undergraduates: A report of their activities and progress in college in the 1980s. Los Angeles: Center for the Study of Evaluation, University of California, Los Angeles.
Picciano, A. G., & Dziuban, C. D. (Eds.). (2007). Blended learning: Research perspectives. Needham, MA: The Sloan Consortium.
Plax, T. G., Kearney, P., McCroskey, J. C., & Richmond, V. P. (1986). Power in the classroom VI: Verbal control strategies, nonverbal immediacy and affective learning. Communication Education, 35, 43–55.
Richmond, V. P. (1990). Communication in the classroom: Power and motivation. Communication Education, 39, 181–195.
Richmond, V. P., Gorham, J. S., & McCroskey, J. C. (1987). The relationship between selected immediacy behaviors and cognitive learning. In M. A. McLaughlin (Ed.), Communication yearbook, Vol. 10 (pp. 574–590). Newbury Park, CA: Sage.
Roach, K. D. (1994). Temporal patterns and effects of perceived instructor compliance-gaining use. Communication Education, 43, 236–245.
Rodriguez, J., Plax, T. G., & Kearney, P. (1996). Clarifying the relationship between teacher nonverbal immediacy and student cognitive learning: Affective learning as the central causal mediator. Communication Education, 45, 293–305.
Russell, T. L. (1999). The no significant difference phenomenon: A comparative research annotated bibliography on technology for distance education. Raleigh, NC: Office of Instructional Telecommunications, North Carolina State University.
Scott, M. D., & Wheeless, L. R. (1975). Communication apprehension, student attitudes, and levels of satisfaction. Western Journal of Speech Communication, 41, 188–198.
Sfard, A. (1998). On two metaphors for learning and the dangers of choosing just one. Educational Researcher, 27(2), 4–13.
Simpson, E. J. (1974). The classification of educational objectives in the psychomotor domain. In R. J. Kibler, D. J. Cegala, L. L. Barker, & D. T. Miles (Eds.), Objectives for instruction and evaluation (pp. 107–112). Boston: Allyn and Bacon.
Tallent-Runnels, M. K., Thomas, J. A., Lan, W. Y., Cooper, S., Ahern, T. C., Shaw, S. M., et al. (2006). Teaching courses online: A review of the research. Review of Educational Research, 76(1), 93–135.
Taylor, P. C., & Maor, D. (2000). Assessing the efficacy of online teaching with the constructivist on-line learning environment survey. Proceedings from Teaching and Learning Forum 2000. Retrieved December 30, 2007, from http://lsn.curtin.edu.au/tlf/tlf2000/taylor.html
Walker, S. L. (2003). Distance education learning environments research: A short history of a new direction in psychosocial learning environments. Paper presented at the Eighth Annual Teaching in Community Colleges Online Conference, Honolulu, HI. Retrieved December 30, 2007, from http://makahiki.kcc.hawaii.edu/tcc/2003/conference/presentations/walker.html
Zechmeister, E. B., Rusch, K. M., & Markell, K. A. (1986). Training college students to assess accurately what they know and don't know. Human Learning: Journal of Practical Research and Application, 5(1), 3–19.

Appendix B. CAP Perceived Learning Scale scoring key

Total CAP score. Score the test instrument items as follows. Items 1, 3, 4, 5, 6, 8, and 9 are directly scored; use the scores as given on the Likert scale, i.e., 0, 1, 2, 3, 4, 5, or 6. Items 2 and 7 are inversely scored; transform the Likert scale responses as follows: 0 = 6, 1 = 5, 2 = 4, 3 = 3, 4 = 2, 5 = 1, and 6 = 0. Add the scores of all 9 items to obtain the total CAP score. Scores can vary from a minimum of 0 to a maximum of 54. Interpret higher CAP scores as higher perceptions of total learning.

CAP subscale scores. Add the scores of the items as shown below to obtain subscale scores. Scores can vary from a minimum of 0 to a maximum of 18 for each subscale.

Cognitive subscale: Add the scores of items 1, 2, and 5.
Affective subscale: Add the scores of items 4, 6, and 9.
Psychomotor subscale: Add the scores of items 3, 7, and 8.
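The scoring key translates directly into code; a minimal sketch follows (the function name and input format are ours, not part of the instrument):

    from typing import Dict, List

    REVERSE_SCORED = {2, 7}  # negatively worded items
    SUBSCALES = {
        "cognitive": (1, 2, 5),
        "affective": (4, 6, 9),
        "psychomotor": (3, 7, 8),
    }

    def score_cap(responses: List[int]) -> Dict[str, int]:
        """Score the 9-item CAP Perceived Learning Scale.

        `responses` holds the raw 0-6 Likert answers for items 1-9 in
        order. Returns the total score (0-54) and the three subscale
        scores (0-18 each).
        """
        if len(responses) != 9 or any(not 0 <= r <= 6 for r in responses):
            raise ValueError("expected nine responses on the 0-6 scale")
        # Reverse-score items 2 and 7 (0 -> 6, 1 -> 5, ..., 6 -> 0).
        scored = {i: (6 - r if i in REVERSE_SCORED else r)
                  for i, r in enumerate(responses, start=1)}
        scores = {name: sum(scored[i] for i in items)
                  for name, items in SUBSCALES.items()}
        scores["total"] = sum(scored.values())
        return scores

    # Example: score_cap([5, 1, 4, 3, 5, 4, 2, 4, 5])
    # -> {'cognitive': 15, 'affective': 12, 'psychomotor': 12, 'total': 39}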

