
Dissertation

A Conceptual Framework to Evaluate E-learning from the Student Perspective

Abstract

This research paper investigated the development of an eLearning facility evaluation framework from a student perspective. The research employed a mixed design in which data were collected from both primary and secondary sources. The findings indicate that there is a strong positive correlation between eLearning evaluation and the level of student satisfaction. Factors that should be considered in undertaking eLearning evaluation include individual learner variables, learning environment variables, contextual variables, usability and technological factors, pedagogical variables, and security variables. Additionally, the results of the study indicate that most higher education institutes employ student surveys and in-built data analytics tools to measure user profile information and usage. The results also indicate that higher education institutes can minimize the costs incurred in undertaking eLearning evaluation by carrying out an effective eLearning planning and control process, using evaluation methods that cover all aspects of effective eLearning, developing effective evaluation objectives, incorporating the relevant stakeholders in the evaluation, and undertaking constant evaluation for improvement.



Acknowledgements

Table of Contents

1.0 CHAPTER ONE: INTRODUCTION
1.1 Background information
1.2 Brief description of e-learning
1.3 Need for evaluation of e-learning systems
1.4 Research problem
1.5 Research Questions
1.6 Research aim
1.7 Research Objectives
1.8 Research Methodology
1.9 Significance of study
1.10 Limitations
2.0 CHAPTER TWO: LITERATURE REVIEW
2.1 Introduction to e-learning
2.2 History of Learning
2.3 Advantages of eLearning
2.4 The benefits and drawbacks of e-learning
2.4.1 Benefits
2.4.2 Drawbacks
2.5 Stages in the Development of Learning Technology
2.6 Concept of SDQ
2.6.1 Relationship between teacher and infrastructure service
2.6.2 Relationship between student and infrastructure service
2.6.3 Relationship between University support and infrastructure service
2.6.4 Relationship between system quality and information quality
2.6.5 Relationship between system quality, information quality and e-learning SDQ
2.7 Evaluation of e-learning facilities
2.7.1 Tools and instruments for evaluation of e-learning
2.8 Ways through which costs of evaluating e-learning facilities can be reduced
2.9 How an e-learning facility can be evaluated from a student perspective
3.0 CHAPTER THREE: METHODOLOGY
3.1 Introduction
3.1 Research tradition
3.2 Research setting
3.3 Research philosophy
3.4 Research trustworthiness
3.4.1 Credibility
3.4.2 Transferability
3.4.3 Dependability
3.4.4 Confirmability
3.5 Research delimitation
3.6 Participants (population and sample)
3.7 Instruments
3.7.1 Questionnaires
3.7.2 Interviews
3.7.3 Secondary data sources
3.8 Questionnaire reliability and validity
3.9 Design
3.10 Data Analysis
3.10 Ethical considerations
4.0 CHAPTER FOUR: SUMMARY OF QUANTITATIVE RESULTS
5.0 CHAPTER FIVE: DISCUSSION OF RESULTS
6.0 Conclusion and Recommendations

1.0 CHAPTER ONE: INTRODUCTION

1.1 Background information

There is a growing movement towards designing electronic learning environments that recognize the communicative power of the Internet to support an active and constructive role for learners (Oliver & Omari, 1999; Trinidad & Albon, 2002). This research is an effort to model such modern ways of assessing e-learning facilities in higher education institutions through a case study. Many factors influence the learning experience, such as the infrastructure, the quality of content and assessment, the quality of learner support systems, the assumptions made by learners and educators about the learning experience itself, the educational design, and peer support networks for learners and educators (Trinidad, Fisher & Aldridge, 2003). Considering the effect these factors can have on the learning experience, it becomes imperative to know how they operate when learning takes place through an e-learning facility. Such knowledge can be gained through the development of appropriate evaluation models, keeping in view the need to ultimately help students learn in a supported and effective learning environment. The evaluation should aim at stimulating institutions to address the various issues involved in the student learning process.

While some researchers feel that e-learning is still in its infancy, a growing number of others emphasize the need to develop models that provide ways to evaluate e-learning systems, yielding in-depth information on the quality of e-learning education provided by higher education institutions and the effect of such quality on the learning process of students (Ardito, Costabile, De Marsico, Lanzilotti, Levialdi, Roselli & Rossano, 2006).


1.2 Brief description of e-learning

E-learning is the development of knowledge and skills through the use of information and communication technologies (ICTs) to support interactions for learning: with content, with learning activities and tools, and with other people (Wilson & Stacey, 2004, p. 35). A variety of broad terms have been used to refer to the use of technology in education, including electronic learning, distance education, distance learning, online instruction, multimedia instruction, online courses, web-based learning, virtual classrooms, computer-mediated communication, computer-based instruction, computer-assisted instruction, technology uses in education, telemedicine, and e-health.

Electronic learning resources have included multimedia, integrated learning systems, Web content, and digital text. Traditionally, teachers used these resources as additional, separate supports to classroom instruction (e.g., watching a video about World War II in a history class). What we are seeing now, however, is the development of new, much more complex e-learning resources that address the breadth of classroom curricular and instructional needs. These resources often use the Internet and integrate multimedia, data collection, and Web content into complete packages that teachers can use to support student achievement. In fact, some predict that e-learning resources will eventually replace traditional textbooks (Mumtaz, 2000).

Attitudes towards e-learning, as reflected in scholarly and academic reviews, range from neutral to positive. On one hand, it is noted that e-learning (e.g., DE, CAI, etc.) is at least as effective as traditional instructional strategies (Rosenberg, Grad, & Matear, 2003), and that there are no major differences in academic performance between the more traditional and more technology-oriented modes of instruction. On the other hand, many reviews go further, reflecting a particularly positive attitude towards the impact of e-learning (Mayer & Moreno, 2003). Benefits include offering a variety of new possibilities to learners (Breuleux, Laferrière, & Lamon, 2002), in addition to having a positive effect on student achievement in different subject matter areas (Soe, Koki, & Chang, 2000; Christmann & Badgett, 2003).

Recent surveys indicate that students in Canada and the United States are enjoying unprecedented access to computers at school. There is one computer for every six high school students in Canada; in the United States there is one computer for every five students. Both countries are well above the average of one computer for every 13 students within member nations of the Organization for Economic Co-operation and Development (Van Dijk, 2006). The challenge of providing physical access to computers is rapidly being met. However, access to computers is not translating into equivalent use: students and their teachers are not, so far, capitalizing on the physical investment. Internet connectivity is far from complete: 80% of Canada's school computers are connected to the World Wide Web, compared with only 39% of those in the United States (Van Dijk, 2006).

The need for computer literacy training for teachers, installation and technical support staff, physical wiring costs, network provider fees, and adherence to network security requirements all impose costs and barriers to greater online access. Although the rate of growth in Internet use from home, school, and work is leveling off, the volume of users and the range of purposes for which the Internet is being used are sure signs that electronic communication, electronic commerce, electronic education (e-learning), and the World Wide Web are here to stay. Today's kids play games on computers, and they play those games interactively over the Internet. They have never known a world without computers, and many have never known a world without the Internet (Callister & Burbules, 1990).

Now, schools, educational institutions, and teachers must find ways to integrate credible, relevant e-learning opportunities into the classroom so that the explosion of electronic information does not leave our students, and us, behind. Neglecting to incorporate into our classrooms this communication tool and interactive source of information, which is so much a part of North American life, is like teaching teens how to drive without ever letting them into a car (Surma, Geise, Lehman, Beasley & Palmer, 2012).

However, reviews also acknowledge the need to address design issues in e-learning courses and activities more closely. Developing effective strategies for teaching and learning is also called for (Meredith & Newton, 2004; Oliver & Herrington, 2003), and addressing learners' needs in the design of e-learning activities is suggested by some reviews (Ewing-Taylor, 1999; Crawford, Gannon-Cook, & Rudnicki, 2002). Where implementation issues are addressed, there seems to be a consensus among reviewers that effective use of e-learning requires the presence of immediate, extensive, and sustained support (Sclater, Sicoly, & Grenier, 2005). Nevertheless, reviews report a major concern regarding the absence of strong empirical evidence to support the use of e-learning (Whelan & Plass, 2002; Torgerson & Elbourne, 2002; Urquhart, Chambers, Connor, Lewis, Murphy, Roberts et al., 2002).

One review considered the quality of research to be inadequate and called for more scientific rigour and less reliance on anecdotal evidence. Another review emphasized that advances in DE technologies are outpacing research on their effectiveness (Hirumi, 2002). An extra obstacle facing the advancement of research in the field seems to be the fact that e-learning researchers are not uniform in the methods they use and the questions they ask (Cantoni & Rega, 2004). The foregoing discussion brings into focus the need to develop standard assessment systems for e-learning facilities that identify ways to bring some order to the definitions of e-learning and the various methods through which e-learning is provided.

1.3 Need for evaluation of e-learning systems

A cursory look at the educational advantages that e-learning resources have over traditional print resources, such as textbooks, will set the basis for identifying the need to evaluate e-learning facilities. Teachers are finding e-learning resources more flexible for teaching and learning. Electronic media (print, video, audio, software, and systems) are being integrated to create a more dynamic learning experience for children and better instructional support for teachers. Electronic resources, especially those delivered via the Internet, can be very flexible, up-to-date, and easy for teachers to use in their classrooms (Ozkan & Koseler, 2009).

Evaluating e-learning is expensive. It involves extensive training of evaluators, development of evaluation criteria, and establishment of guidelines. Using external evaluations also has a cost attached, since the variation in standards, in adherence to standards, and in evaluation methods and criteria means time and effort in choosing the evaluator. If educators lack confidence in the agency conducting evaluations, or the agency's goals differ, educators must assess the evaluation for suitability and fit with local educational needs, objectives, and curriculum standards. Saving time and money requires a careful choice (Ardito et al., 2006).

The range of approaches to the evaluation of e-learning resources is quite broad. There are state-funded evaluations, like those conducted by the California Learning Resource Network, in which the review criteria are specific to California law and curriculum standards, and by North Carolina's EvaluTech, in which the evaluation criteria are relatively detailed but are not currently aligned with state student academic standards. There are also national consortiums in the United States, like the Gateway to Educational Materials (www.thegateway.org), that attempt to recommend those resources that are free of stereotyping, bias, and social inequity (Ozkan & Koseler, 2009).

The right third-party external review and assessment option can save schools and other institutions of education hundreds of hours and untold sums of money, and it can provide protection from potentially embarrassing and problematic errors. Avoiding the wrong or inappropriate resource, or quickly finding the right one, is just what educators need as they struggle to do more with less. However, whether these evaluation methods are effective and assist learners efficiently is a question that needs to be addressed. E-learning is still evolving, and there is no consensus amongst researchers on a method of evaluation that could be uniformly applied to different educational programs and institutions (Ardito et al., 2006).


1.4 Research problem

A brief literature review shows that e-learning is fast catching on as an important medium of learning. In her article "E-Learning in Three Steps", Kathryn Barker opined that "the development and implementation of e-learning isn't optional" (DeRouin, Fritzsche & Salas, 2004, p. 6). With the growing demand for e-learning all over North America (up 125% in one Canadian province alone over a two-year period), school boards everywhere will be, or already are, seeking ways to deliver quality, reliable, efficient and effective e-learning opportunities. The question, then, is not whether to provide e-learning opportunities, but how and at what cost. The goal is to use electronic media to support students in their daily practice, in classrooms, or over the Internet, so they can have a better learning experience. But how easy will it be for the ordinary student with basic computer skills (like word processing and e-mailing) to adapt to and use new electronic resources? As with any new curriculum, students will need appropriate training, but an effective e-learning resource should be student friendly and easy to incorporate in the classroom setting or in distance education.

Any effective learning resource must respect the fundamental values of learning. How will students be able to identify the best resource for their classrooms? How will teachers know what kind of facilities should be used to assist the students? How will institutional business officials know which purchases of e-learning facilities to approve? The tools and strategies already used to make good purchasing choices will work here in theory, and they can be used in one of two ways: to evaluate these new, complex resources on one's own, or to take advantage of evaluations by educators in other districts, agencies, or consortiums across the continent. However, whether these methods will work is a question that needs to be addressed. Keeping in view the above discussion, the following research questions are set for this research (Devedžić, Jovanović & Gašević, 2007).

Two of the most important components of any effective resource are curriculum correlation and inclusiveness. Teachers want resources that support the curriculum and advance the learning of their students according to the relevant standards.

1.5 Research Questions

1. In the absence of a uniform standard for evaluating an e-learning facility, what methods could be used in higher education institutions?

2. With the increasing focus on the assistance to be provided to learners using e-learning facilities, what areas need to be focused on while developing an evaluation mechanism that can directly benefit the learning needs of students?

3. What ways can be used to minimize the expense of the evaluation process while at the same time making it successful?

4. Are there models that have been developed for evaluating e-learning facilities, and if there are no standard models, is it possible to develop an evaluation mechanism that could be generalized?

Considering the inconsistency seen in the literature with regard to the availability of standard evaluation mechanisms, these research questions are expected to provide answers that will stimulate the creation of new ideas for developing new models for the evaluation of e-learning facilities.


1.6 Research aim

To investigate the various methods of e-learning evaluation that can be employed by higher education institutes, in order to develop a conceptual model for evaluating an e-learning facility from the students' perspective.

1.7 Research Objectives

1. Identify various practices followed by institutions in evaluating e-learning facilities and determine the important attributes of an evaluation mechanism from the students' perspective.

2. Develop areas of focus that could be used for the evaluation of e-learning facilities.

3. Identify ways of minimizing the costs of evaluating e-learning facilities.

4. Develop a conceptual model for evaluating an e-learning facility from the students' perspective.

1.8 Research Methodology

E-learning is a field that concerns learners, teachers, institutional managers and other stakeholders. A review of the literature shows that many researchers have used a combination of quantitative and qualitative research methodologies to conduct empirical studies, and this research likewise uses both. Quantitative data are expected to provide the much-needed objectivity in developing the conceptual model, while qualitative data are expected to provide in-depth knowledge of the students' perspective towards e-learning. Triangulation will be used to derive final outcomes from the data analysis. The focus of this study is on qualitative data, due to the use of the case study method; however, quantitative data are needed to make the study more complete by integrating objectivity into it. Quantitative data were collected through a semi-structured questionnaire that students were required to fill in, so that their responses regarding the research questions could be analyzed. Qualitative data were obtained by interviewing some of the students to determine their responses to the interview questions (Heppner, Wampold, Owen, Thompson & Wang, 2015).
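As a minimal illustration of the quantitative side of this design, the sketch below shows how Likert-scale questionnaire responses could be summarized and the association between facility evaluation scores and student satisfaction tested. The column names and data are hypothetical, not taken from the actual instrument.

    import pandas as pd
    from scipy.stats import pearsonr

    # Hypothetical responses: one row per student, 5-point Likert items.
    # Column names are illustrative; the real questionnaire items differ.
    responses = pd.DataFrame({
        "evaluation_score":   [4, 5, 3, 4, 2, 5, 4, 3, 5, 4],
        "satisfaction_score": [4, 5, 2, 4, 2, 5, 3, 3, 5, 4],
    })

    # Descriptive statistics for each item.
    print(responses.describe())

    # Pearson correlation between evaluation and satisfaction scores.
    r, p = pearsonr(responses["evaluation_score"], responses["satisfaction_score"])
    print(f"r = {r:.2f}, p = {p:.3f}")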

1.9 Significance of study

Researchers concur that no standardized methods have been found in the literature to evaluate e-learning facilities, especially from the students' perspective. With the increasing proliferation of e-learning across the world, there is a growing need felt across the spectrum of researchers to evaluate e-learning facilities in a standard manner. Standardization is expected to enable consistent outcomes from the evaluation, and a model of evaluation, if generalized, could be used by many institutions with reasonably assured outcomes. Additionally, to date, e-learning facility evaluation has not been examined solely from the perspective of students. There is a tremendous need to evaluate e-learning facilities in order to understand the needs of students in e-learning efforts, especially keeping in view changing technologies, cost and time as parameters. This research is expected to fill this important gap found in the literature. Undertaking the study was of vital importance in the sense that it gave an insight into the various e-learning evaluation methods from a student perspective. Such a research construct enables the effective use of e-learning facilities in order to promote effective learning by students. Additionally, undertaking the research enabled higher education institutes to adopt cost-effective e-learning strategies and evaluation methods.

1.10 Limitations

The major limitations encountered in undertaking the study were time constraints and limitations associated with the research findings. The research was undertaken in less than twelve weeks, which presented a major time constraint. Additionally, the research findings were limited in that data were collected solely from 150 participants from a single higher education institute; this is a small sample given the large number of students in the various higher education institutes that employ eLearning. Other limitations encountered in the process of undertaking the study included non-cooperative respondents, ignorance, and hostility (Monsen et al., 2008). Time constraints were compounded in cases where some respondents delayed returning their questionnaires (Vithal & Jansen, 1997). Regarding non-cooperative participants, some respondents were not willing to cooperate in providing the required research information; in such cases, the researcher was forced to employ a range of tactics to counteract the effects of non-cooperation (McDaniel & Gates, 1998). Hostility was encountered in the sense that some information sources and participants appeared hostile and unwilling to give out the desired information. Lastly, ignorance was a limitation in that some respondents did not fully understand the main ideas in the research questions; hence, it was difficult to obtain the relevant information from such respondents (Monsen et al., 2008).


2.0 CHAPTER TWO: LITERATURE REVIEW

2.1 Introduction to e-learning

The concept of e-learning has been in existence for many years but it was first

employed in 1999 at a CBT systems seminar. During this eve other terms also came to

light in search of accurate description such as online learning and virtual learning.

The principles behind e-learning have been well documented throughout history however

with evidence suggesting that early forms of electronic learning existing as far back as

the 19th century. Many definitions have come up with attempt to try to define e-learning

with the most basic but conceptual of them being a computer based educational tool or

system that facilitates the learning process. E-learning has been integrated in many

education institutions and education programs thus facilitating a gradual transformation

from traditional ways of learning to electronic environment. The above analysis implies

that eLearningis a process that didnt emerged in a fort night but it can be considered

as a revolutionary process that started with generation of computers (Sangr,

Vlachopoulos& Cabrera, 2012).

2.2 History of Learning

The most conventional form of learning has always been traditional learning. Traditional learning is an ancient method of learning conducted by means of a tutor congregating students in places such as classrooms, labs or seminar rooms to study. While gathered at these places, they would be given training in different subjects. This method of learning has been practiced worldwide at all levels of education, including kindergartens, primary, secondary and high schools, and tertiary institutions. Cox (2013) states in his publication that the traditional learning environment incorporated teachers and professors who employed various teaching styles, the most popular traditional teaching style being teaching by narration. Traditional learning, like any other method, has its own advantages and disadvantages, which are more or less similar across many cultures.

Like any other method of learning, it has its strengths and weaknesses. One of the weaknesses of traditional learning arises from the number of students in a single class, in that the number of students in a classroom influences the performance of individual students. In 1960, the Chant Royal Commission on Education in British Columbia reported that the sizes of public school classrooms and teacher ratios, which were referred to in a number of briefings, were categorically linked to performance, and it supported the view that class sizes should be reduced in order to generate desirable results (Byun & Loh, 2015).

2.3 Advantages of eLearning

E-learning, otherwise known as online training, makes use of video and audio in training sessions. Its advantages include the following:

- Training materials can be given to the trainees for their own use, and sessions can be recorded, which cuts the cost of the trainer since no further facilities are required.

- The trainee can learn from anywhere, with no need to travel.

- Although online training cannot fully recreate a physical training environment, since it can be done from any location, it works perfectly in the many cases where the training environment is not especially important.

- No hard copy of training material is needed, because the recorded videos and audio act as the training material. Apart from this, if a teacher prefers to offer other notes or documents, they can upload PDF or Word documents without paying any extra cost.

- There is no extra maintenance cost in offering e-learning.

- One of the great features of Learning Management Systems is that the performance of the trainee can be checked, and this does not cost anything.

E-learning can therefore be described as a computer-based educational resource or system which enables a person to learn from anywhere, and at any given time. Today e-learning is mostly delivered through the internet, although in the past it was delivered using a blend of computer-based methods, such as CD-ROMs. With recent advancements in technology, the geographical gap has been bridged with the use of tools that make you feel as if you are inside the classroom. E-learning offers the ability to share material in all kinds of formats, such as videos, slideshows, Word documents and PDFs. Conducting webinars (live online classes) and communicating with professors via chat and message forums are also options available to users (Sarwar, Ketavan & Butt, 2015).

There are a large number of different e-learning systems (commonly known as Learning Management Systems) and methods which allow courses to be delivered. With the right tool, various processes can be automated, such as the marking of tests or the creation of engaging content. E-learning provides learners with the ability to fit their learning schedules around their lifestyles effectively. Studies by Chimalakonda (2010) and Guri-Rosenblit and Gros (2011) reiterate that the idea of eLearning evolved from distance education, and that it is still struggling to gain full recognition and accreditation within mainstream education as an approach for high-quality provision.

While developments in eLearning have been exciting and beneficial, finding ways of enhancing the quality and effectiveness of provision has posed a serious challenge. In response to this concern over the legitimacy, value and quality of online programs, Davies et al. (2011) developed a model that provided a comprehensive conceptual framework which strives to identify factors that enhance the quality of fully-online degree programs. Kimber and Ehrich (2011) argue that globalization, the transnational provision of higher education and the use of market mechanisms have increased the complexity of issues of accountability, authority, and responsibility in performing quality checks.

The growth of the internet and its impact on the education system has created a new learning model called e-learning, which is considered a new revolution in the world of education. Guri-Rosenblit and Gros (2011) describe e-learning as the type of learning where people pursue professional or educational courses without the use of traditional learning methods; this involves taking a course or going to school remotely by making use of the web as a classroom. According to Jurado, Redondo & Ortega (2012), for the purposes of this study electronic learning refers to the delivery of educational material via any electronic medium, such as the internet, intranets, extranets, satellite broadcast, audio or video tape, CDs and computer-based training. E-learning is currently one of the popular models of learning. Like any other means, it has its own advantages and disadvantages, the most important advantage being that participants can access programs anywhere, at any time, compared to traditional learning students, who are bound by time and location.

Many important developments have occurred in education since the invention of the internet. For instance, many learners in the present era are well versed in the use of computers, laptops and smartphones, text messaging platforms, and the internet. This has made participating in and running online courses simple. Messaging, social media and various other means of online communication allow learners to keep in touch and discuss course-related matters, while providing a sense of coherence. Overall, traditional learning turns out to be expensive, as it takes a long time and the results can vary. E-learning offers an alternative that is user friendly, faster, cheaper and potentially better (Guri-Rosenblit & Gros, 2011).

2.4 The benefits and drawbacks of e-learning

2.4.1 Benefits

E-learning offers great benefits which make the creation and delivery process seemingly easier and hassle-free. Some of the important benefits are listed below.

It has No Boundaries or Restrictions

Location restrictions and time are among the issues that learners and teachers have to face in learning. In the case of face-to-face learning, location limits attendance to the group of learners who are able to participate in the area, and time limits the crowd to those who can attend at a specific time. E-learning, on the other hand, facilitates learning without having to organize when and where everyone who is interested in a course can be present (Steimle, Gurevych & Mühlhäuser, 2007).

It is more Fun

Designing a course in a way that makes it interactive and fun through the use of multimedia enhances not only the engagement factor, but also the relative lifetime of the course material in question. It also enhances students' concentration and their understanding of the course being taught (Lytras, Pouloudi & Korfiatis, 2003).

It is cost effective

This applies to both tutors and students: there is no need to pay much to acquire updated versions of textbooks for schools or colleges. While textbooks often become obsolete after a certain period of time, the need to constantly acquire new editions is not present in e-learning (Lytras, Pouloudi & Korfiatis, 2003).

It fits to any scenario

As companies and organizations adopt technologies to improve the efficiency of day-to-day operations, the use of the internet becomes a necessity. As multinational corporations expand across the globe, the chances of working with people from other countries also increase, and training all those parties together is an issue that e-learning can successfully address (Lytras, Pouloudi & Korfiatis, 2003).


2.4.2 Drawbacks

Although the e-learning concept is becoming more widely adopted for education and training, many online courses are still poorly designed; some are little more than electronic versions of paper-based materials. Overall, the reputation of online courses is not good, while the expectation that well-designed courses will effectively teach a topic to their target students is high (Nayak & Suesaowaluk, 2007). The most important strengths of eLearning courses for students come from their minimal limitations of time and the fact that they are not bound to a location; besides that, the number of students in virtual classrooms is not an issue, since e-learning courses are student-centered, compared to traditional learning courses, which are instructor-oriented. Some researchers believe that interaction is an important aspect of learning.

Other researchers suggest that online education adversely affects interaction, contributing to a lowering of the quality of the educational experience (Rahm and Reed, 1997). Further studies of online learning indicated that dissatisfaction with online courses resulted from feelings of isolation and lack of interaction with fellow students and instructors.

The weaknesses of e-learning courses are that they are not suitable for all subjects, they are not comfortable for all students who are used to traditional learning, and they contribute to low learner motivation due to the lack of face-to-face interaction between instructor and students. The lack of face-to-face interaction influences student performance; in some universities the e-learning courses are not conducted fully as distance learning, and there are some face-to-face sessions to solve students' issues and to brief them on the course (Pavlov & Paneva, 2005).


2.5 Stages in the Development of Learning Technology

Figure 1: Stages in the development of learning technology

2.6 Concept of SDQ

A literature review shows that the concept of service quality has been well researched, although under various names. For instance, Alsabawy et al. (2013) argue that e-service quality has been investigated and measured using scales including WebQual, SITE-QUAL, eTailQ, PIRQUAL, and e-SELFQUAL. A major deficiency in these scales is that the service quality dimensions they address do not take into account the sub-dimensions of service delivery quality (SDQ) that influence user satisfaction with e-learning services (Alsabawy et al., 2013). In this regard, Alsabawy et al. (2013) identified through their research some sub-determinants of e-learning SDQ: IT infrastructure services, system quality and information quality were the three sub-dimensions identified by Alsabawy et al. (2012) as affecting e-learning SDQ.

As far as SDQ itself is concerned, Alsabawy et al. (2013) identified six endogenous variables, namely efficiency, privacy, fulfilment, contact, privacy and responsiveness. The outcomes of the research conducted by Alsabawy et al. (2013) produced mixed results, and the choice of the exogenous variables representing the sub-dimensions of SDQ appeared to focus only on infrastructure services, whereas there are other factors identified by researchers that can play a significant role in determining SDQ. For instance, from the literature review (Section ---) it can be seen that Selim (2007) identified teacher, student, technology and university support as critical success factors that affect e-learning acceptance by students.


While technology as a factor identified by Selim (2007) is very similar to the infrastructure services identified by Alsabawy et al. (2013), the other factors identified by Selim (2007), namely teacher, student and university support, are different from those identified by Alsabawy et al. (2013). Selim (2007) identified factors that influence student acceptance, whereas Alsabawy et al. (2012) established a relationship between certain critical e-learning factors and SDQ. However, it can be argued that although the critical success factors identified by Selim (2007) are considered as influencing student acceptance of e-learning, such acceptance necessarily depends on the quality of service rendered. This in turn enables the researcher to argue that the critical success factors identified by Selim (2007) could be combined with the sub-dimensions identified by Alsabawy et al. (2013) and linked to SDQ.

The factor technology identified by Selim (2007) is not added separately, as it is very similar to the infrastructure services identified by Alsabawy et al. (2013) and is already part of the original model developed by Alsabawy et al. (2013) referred to above. As far as the concept of SDQ is concerned, the scale developed by Alsabawy et al. (2013) can be modified so that the six measures of the components of SDQ, namely efficiency, privacy, fulfilment, contact, privacy and responsiveness, are integrated into a single construct named SDQ. Such an addition is supported by Selim (2007), who argues that further research is needed to establish the causal relationships among the four critical factors, namely teacher, student, technology and support. Thus one of the causal representations that could be conceived is that technology as a factor is determined by teacher, student and university support. In this conception, the technology factor is replaced by the infrastructure services identified by Alsabawy et al. (2013). Thus the model developed by Alsabawy et al. (2013) to identify the determinants of e-learning SDQ can be expanded to include further important determinants, which comprise the following relationships (a hypothetical specification of these paths is sketched in code after the list):

Teacher as a construct is related to infrastructure service

Student as a construct is related to infrastructure service

University support is related to infrastructure service

Infrastructure service is related to information quality

Infrastructure service is related to system quality

System quality is related to information quality

Information quality is related to SDQ

System quality is related to SDQ
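
The relationships listed above can be read as paths in a structural model. The following sketch shows one way such a model could be specified and estimated with the semopy package in Python; the construct names are shorthand for the variables above, the input file is hypothetical, and this is an illustrative specification under those assumptions, not the implementation used by Alsabawy et al. (2013) or Selim (2007).

    import pandas as pd
    from semopy import Model

    # Structural paths mirroring the expanded model described above.
    # Each name stands for a construct score; measurement items are omitted.
    MODEL_DESC = """
    Infrastructure ~ Teacher + Student + UniversitySupport
    SystemQuality ~ Infrastructure
    InformationQuality ~ Infrastructure + SystemQuality
    SDQ ~ SystemQuality + InformationQuality
    """

    # Hypothetical survey data with one column per construct score.
    data = pd.read_csv("sdq_survey_scores.csv")

    model = Model(MODEL_DESC)
    model.fit(data)         # estimate the path coefficients
    print(model.inspect())  # estimates, standard errors, p-values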

2.6.1 Relationship between teacher and infrastructure service

Teacher as a critical e-learning success factor that influences student acceptance of e-learning was established by Selim (2007). The e-learning literature highlights the importance of the instructor, with Hillman, Willis and Gunawardena (1994) arguing that it is the instructional implementation of IT, not the IT itself, that determines the effectiveness of e-learning. Similar arguments are espoused by Webster and Hackley (1997), who argue that instructor characteristics affect e-learning success. Thus instructor characteristics have been identified as an important determinant of e-learning SDQ. Further, the teacher has been found to be an important part of the e-learning infrastructure by researchers (Kim and Bonk, 2006). For instance, Greenhow, Robelia and Hughes (2009) argued that e-learning infrastructure enables teachers to plan and organize the learning activities of students.

Similarly, while investigating a workflow-based e-learning platform, Greenhow, Robelia and Hughes (2009) argued that the teaching sub-workflow system and the infrastructure sub-workflow system are related. These arguments enable the researcher to infer that teacher as a construct can be related to infrastructure. As far as measuring the teacher as a phenomenon that influences SDQ is concerned, the researcher adopts the instrument developed by Selim (2007) for this research, as it has been tested for reliability and validity. Selim (2007) measured the construct teacher using instructor characteristics.

2.6.2 Relationship between student and infrastructure service

As in the case of the phenomenon teacher, student as a critical e-learning success factor that influences student acceptance of e-learning was established by Selim (2007). The literature shows that some researchers (e.g. Beyth-Marom et al., 2003) conclude that e-learning students perform better than traditional learning students, implying that students would like to use e-learning if it facilitates their learning anywhere, anytime, and the way they like (Papp, 2000). These arguments indicate that student is an important construct that influences the e-learning process, including SDQ. Again, as in the case of the relationship between teacher and infrastructure, Beyth-Marom et al. (2003) argue that the student is an important aspect of e-learning infrastructure and e-learning workflow, providing support for relating student as a construct to e-learning infrastructure.


2.6.3 Relationship between University support and infrastructure service

As in the case of the phenomena teacher and student, university support as a critical e-learning success factor that influences student acceptance of e-learning was established by Selim (2007). One of the concerns of researchers (Bergstedt, Wiegreffe, Wittmann & Müller, 2003) is the failure of e-learning projects to achieve their goals due to lack of access to technical advice and support. An important component that could eliminate this problem is university administration support (Selim, 2007). Thus it can be construed that university support is an important construct that influences e-learning SDQ. In addition, the literature on e-learning highlights the need to provide university support in terms of the infrastructure required for an e-learning platform, such as different devices (e.g. desktops, laptops, mobile devices), network technologies (e.g. WiFi, cellular services) and software platforms (e.g. programming languages and models, operating systems, network protocols and services). Bergstedt et al. (2003) argue that there is a relationship between administration and infrastructure in the e-learning workflow systems of universities.

2.6.4 Relationship between system quality and information quality

In the model developed by Alsabawy et al. (2013), system quality acts as a determinant of information quality. This is supported by theoretical arguments found in the extant literature which underpin the relationship between system quality and information quality.


2.6.5 Relationship between system quality, information quality and e-learning SDQ

The research produced by Alsabawy et al. (2013) indicates that six constructs (efficiency, privacy, fulfilment, contact, privacy and responsiveness) represent e-learning SDQ. Accordingly, Alsabawy et al. (2013) portrayed both system quality and information quality as determinants of each of the six constructs individually (see Figure 2).

Figure 2: Relationship between System Quality, Information Quality and SDQ variables

2.7 Evaluation of e-learning facilities

In attempting to evaluate e-learning programs, one of the major challenges that has cropped up has been how to handle the number of variables which impact the effectiveness of a programme, and how to decide what constitutes the dependent and independent variables in a given situation. The literature, and the study of existing evaluation practice, suggest that many evaluation tools and criteria tend to disregard many of these variables. A lot of existing practice is focused mainly on the technology aspect and on learner reaction to the use of the technology. Socio-economic factors such as class or gender are seldom considered, and even learning environment variables such as the subject environment are all too often ignored. Not only does this result in limitations in the data available on the use of ICT in learning, but the limited recognition of the different variables can distort analysis of the weaknesses (and strengths) in current e-learning provision (Ardito et al., 2006).


Selim (2007) defines evaluation as the purposeful gathering, analysis and discussion of evidence from relevant sources on the quality, effectiveness, and impact of provision, development or policy. The measurement of students' feedback is viewed as the most important component of quality checks, but there have been mixed reports as to its effectiveness. For instance, Guru and Drillon (2009) argue that analyzing users' perceptions of an e-learning system would offer valuable data to evaluate and improve its functionality and performance. Conversely, Ardito et al. (2006) reported from their research findings that student feedback was not always fully adequate to support quality enhancement. A researcher is therefore cautioned that they will need to make judgments in this area, and perhaps conduct further research to validate the deductions found.

The evaluation of e-learning has developed into a more detailed framework, with five major clusters of variables emerging: individual learner variables, environmental variables, technology variables, contextual variables and pedagogic variables. All of these can be broken down into more precise groups and further subdivided until individual variables can be identified and isolated. A clear distinction between quality assurance and evaluation was attempted by Deepwell (2007), who views evaluation as an instrument of quality enhancement rather than quality assurance. Wang (2006) identified learning effectiveness, access, student satisfaction, faculty satisfaction, and cost effectiveness as the five pillars of quality of online programs.

Individual learner variables include aspects such as physical characteristics (e.g. age), learning history, learner attitude (positive or negative), learner motivation and familiarity with the technology. Learning environment variables include the immediate (physical) learning environment, the organizational or institutional environment and the subject environment. Contextual variables include socio-economic factors (e.g. class, gender), the political context and cultural background. Technological variables include hardware, software, connectivity, media and mode of delivery. Pedagogic variables include the level and nature of learner support systems, accessibility issues, methodologies, flexibility, learner autonomy, selection and recruitment, assessment and examination, and accreditation and certification (Ozkan & Koseler, 2009).
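One way to operationalize these clusters in an evaluation instrument is as a simple checklist structure from which questionnaire items can be generated. The grouping below restates the variables above; the code structure itself is only an illustrative sketch, not an instrument from the literature.

    # Evaluation variable clusters restated as a checklist structure.
    # The groupings follow the text above; the data structure is illustrative.
    EVALUATION_CLUSTERS = {
        "individual_learner": ["age", "learning history", "attitude",
                               "motivation", "familiarity with technology"],
        "learning_environment": ["physical environment",
                                 "institutional environment",
                                 "subject environment"],
        "contextual": ["class", "gender", "political context",
                       "cultural background"],
        "technological": ["hardware", "software", "connectivity",
                          "media", "mode of delivery"],
        "pedagogic": ["learner support", "accessibility", "methodologies",
                      "flexibility", "learner autonomy",
                      "selection and recruitment",
                      "assessment and examination",
                      "accreditation and certification"],
    }

    # Generate one placeholder questionnaire item per variable.
    for cluster, variables in EVALUATION_CLUSTERS.items():
        for v in variables:
            print(f"[{cluster}] Rate the impact of '{v}' "
                  f"on your e-learning experience (1-5)")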

There exist many handbooks on the subject of e-learning which focus primarily on evaluation. The evaluation methods and tools differ widely, but what they have in common is that they recognize the importance of evaluation, with most proposing that evaluation should be an integral part of any e-learning initiative or development. In this regard, they tend toward a management model of evaluation. The major aim of the evaluation is to offer feedback to influence e-learning implementation and future development (Paechter, Maier & Macher, 2010).

2.7.1 Tools and instruments for evaluation of e-learning

There exists a large body of literature offering details on tools for the evaluation of e-learning. These are mainly divided into two types: firstly, online data gathering instruments for assessing user interface characteristics; and secondly, devices to record and analyze usage by duration and frequency of log-in, pages accessed and user profile. Many of these are complex in their design and ingenuity, but lack guidance on interpretation and analysis (Ozkan & Koseler, 2009).
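As an illustration of the second type of tool, the sketch below shows how raw log-in records could be reduced to the duration, frequency and page-access measures just described. The log format, column names and figures are hypothetical.

    import pandas as pd

    # Hypothetical log of e-learning sessions: one row per log-in.
    logs = pd.DataFrame({
        "student_id": ["s1", "s1", "s2", "s2", "s2", "s3"],
        "login":  pd.to_datetime(["2024-03-01 09:00", "2024-03-03 10:00",
                                  "2024-03-01 08:30", "2024-03-02 14:00",
                                  "2024-03-05 19:00", "2024-03-04 11:00"]),
        "logout": pd.to_datetime(["2024-03-01 09:45", "2024-03-03 11:10",
                                  "2024-03-01 09:00", "2024-03-02 15:30",
                                  "2024-03-05 19:20", "2024-03-04 12:00"]),
        "pages_accessed": [12, 20, 5, 18, 3, 15],
    })

    # Session duration in minutes.
    logs["duration_min"] = (logs["logout"] - logs["login"]).dt.total_seconds() / 60

    # Usage profile per student: log-in frequency, durations, pages accessed.
    usage = logs.groupby("student_id").agg(
        logins=("login", "count"),
        total_minutes=("duration_min", "sum"),
        mean_minutes=("duration_min", "mean"),
        pages=("pages_accessed", "sum"),
    )
    print(usage)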


Return on Investment (ROI) reports

The numerous reports that exist arise from industry-based examples and are written from a human resource department perspective. They tend to conclude that the investment was cost-effective and represented value for money, notwithstanding the fact that in most cases the savings are defined in terms of efficiency rather than effectiveness, with no long-term impact analysis that takes account of unintended outcomes and consequences. It is also difficult to compare figures across reports, because the distinctions between net and gross costs, capital and revenue costs, displacement of existing funds, costs over time, etc. are often blurred or missing. Many return-on-investment evaluation reports appear to be justifying investment rather than evaluating it, and are geared more to an audience of shareholders than of researchers (Strother, 2002).
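To make concrete why blurred cost definitions matter when comparing such reports, the toy calculation below shows how a reported ROI shifts depending on whether gross or net costs are used. Every figure is invented for illustration.

    # Toy ROI comparison; every figure here is invented for illustration.
    benefit = 120_000          # estimated saving from moving training online
    gross_cost = 80_000        # all money spent on the e-learning programme
    displaced_funds = 30_000   # existing training budget redirected, not new spend
    net_cost = gross_cost - displaced_funds

    roi_gross = (benefit - gross_cost) / gross_cost
    roi_net = (benefit - net_cost) / net_cost

    print(f"ROI on gross cost: {roi_gross:.0%}")   # 50%
    print(f"ROI on net cost:   {roi_net:.0%}")     # 140%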

Benchmarking models

This refers to systems employed in the comparison of process and performance, with several attempts having been made to generate sets of criteria for quality-assuring e-learning. These, however, tend to be skewed towards proposing quality standards for e-learning systems and software, which often disregard key variables in the wider learning environment, or are based on criteria associated with evaluating traditional learning processes (which disregard the technology), or on criteria associated with measuring learner achievement through traditional approaches (Lee, Potkonjak, & Mangione-Smith, 1997).


Product evaluation

The greatest share of the focus on evaluation of e-learning consists of reports that describe particular educational software. The vast majority of these reports are commissioned or published by the software developers. This is not to question the usefulness of these reports or necessarily to doubt their validity, but the evaluation of decontextualized software is not an acceptable substitute for the rigorous evaluation of e-learning systems (Sae-Khow, 2014).

Performance evaluation

For instance, as postulated by Scrivens (2000), the USA makes use of the term performance evaluation for what would, in European terms, be referred to as student assessment. Examination of student performance is by no means the only means that can be employed, although it is a powerful indicator of the effectiveness of e-learning. Moreover, the reports surveyed on performance evaluation in the context of e-learning were mainly concerned with online tools and instruments for examining knowledge-based learner performance, and could therefore be categorized under that heading. To address this, there are eight factors to examine when evaluating e-learning. These factors help to determine whether a program is worth your time and effort within your organization.

a) Instructional design

The first area to consider is the instructional design of the content. Regardless of delivery method, a good learning initiative will conform to some instructional process or model. These include, but are not limited to, popular models such as the ADDIE model, which was initially developed in 1975 by Florida State University for use by the U.S. Armed Forces; the Dick and Carey model, which is somewhat more sophisticated and complex than ADDIE; and the ASSURE model, which is more popular with the K-12 academic set. Regardless of the model adopted, the common and most critical component is the identification of learning objectives. These can be evaluated by asking: what does the e-learning program claim it will do for the learners? Is it viable to truly measure against the objectives that the e-learning sets out to instruct, or are the learning objectives weak? (Reeves, Benson, Elliott, Grant, Holschuh, Kim & Loh, 2002)

b) Level of interactivity

One means by which to interpret interactivity is the combination of ways in which the learner engages with the content, from passive page-turning to much more engaging situation-based scenarios. While there is no set formula or minimum threshold, a good e-learning program should incorporate many of these instructional delivery strategies. The more strategies that are used, the better the interactivity is for the learner. And the more the learner is engaged with the content, the better the learning experience and, potentially, the higher the retention. Using more interactive strategies caters to more learning preferences, but it also means more development time and higher costs (Govindasamy, 2001).

c) Visual impact

Learning content must look appealing enough to engage learners from start to finish; otherwise they will tune out before giving the content a chance. This is not fair, but it is a reality which must be put into context. For instance, during training (whether online or instructor-led), if the visuals do not appeal to the learners' sight, the learner has a higher chance of disengaging even if the content has a great message. Examine the look and feel of the learning and determine whether it is engaging and professional. In addition, even if the graphics are engaging, ask yourself if they are right for the audience. Do they reflect the brand of the learning program, the module, or the organization overall? Are the graphics and text relevant? (Koohang & Du Plessis, 2004)

d) Language

In any learning, clear language use is vital, but in a face-to-face situation a good facilitator can see when students do not understand a word or are confused by a concept, and can then elaborate as needed for comprehension. This is not the case with asynchronous e-learning, so the clarity of the message and the semantics used have to be selected with great care. Approach the e-learning's language and tone from two different perspectives: target learners' knowledge and target learners' demographics. For target learners' knowledge: is the jargon used appropriate for the target audience? Are the examples and scenarios used universal to the group, or are they too specific to the experiences of some? Is the learning well written? For target learners' demographics: is the tone used in the learning in keeping with the age of the learners? What is the perceived language proficiency of the learner in relationship to the content? For example, if English is the language used, what is the perceived comprehension level of the learner? Are the examples used universal to this audience, or do they exclude some? For instance, if sports analogies are used, is that appropriate for the audience? Finally, if humor is used in the learning, is it appropriate or could it be misinterpreted by some audiences? Humor is a great strategy for keeping audience attention, but if used incorrectly it can greatly distance a learner from the learning (Koohang & Du Plessis, 2004).

e) Technical functions

If you break down the technology facet of the learning, it can be approached in five areas.

Course interface and navigation - Do the buttons take the learner where they're supposed to and function as intended? Are icons clear and used consistently? Is the e-learning intuitive to use for learners who are new to e-learning? If not, does it include a how-to section on maneuvering through the e-learning? (Ozkan & Koseler, 2009)

Content display and sound - Do the font, text, and images look as intended? If content isn't displayed correctly, is it due to a plug-in, and are the needed plug-ins available for easy download and updating? Does audio sound as it should through the organization's infrastructure, or does it sound distorted or jumbled? (Ozkan & Koseler, 2009)

Accessibility - Is the module Section 508 compliant? In other words, does it meet the criteria of "accessibility" identified in the U.S. Rehabilitation Act, which mandates that learners with differing abilities be able to access the content in an equitable way? In addition, is the e-learning technically accessible by all potential learners? What if a learner can't access the Internet? Can he still take the learning somehow? (Ozkan & Koseler, 2009)

Hyperlinks and files - Do the links take the learner to where they're supposed to? If

there's a link to a file, is that file (such as a PDF) there? Do external hyperlinks work as

expected?


Learning Management Systems and help - If the e-learning connects to your organization's learning management system, is it sharing the data like it's supposed to? Are help screens available to learners? Does the learning identify where learners can turn should they run into technical or content-related issues? (Ozkan & Koseler, 2009)

In some cases, the above areas overlap. For example, the LMS functionality may be constrained by an organization's intranet capabilities, or the audio of the learning may sound terrible because of the sound capabilities of the computers in the organization. The point here is to determine whether the learning is failing to provide the expected experience because of the limitations of the organization or the limitations of the learning module itself. In either case, if it doesn't work well for you as an evaluator, it won't work well for your learners either (Ozkan & Koseler, 2009).

f) Time

Another area of focus should be the length of time taken on the learning module. First, how long does it take a learner to complete the learning? Some experts look at attention span to determine a "good" length of time for an online module; research suggests between 15 and 30 minutes for each topic or module as a good guideline. Putting the attention span and time concept aside for a moment, answer this question: does the learning meet the stated learning objectives? If so, the overall length of the learning program should be as long as it takes to meet the overall learning objectives. These two concepts may seem counterintuitive, but they're not at all. If the e-learning is good overall but longer than the suggested timeframe to keep learners engaged, you could simply separate the content into pieces. That holds the integrity of the learning but better fits the 15- to 30-minute delivery suggestion. However, if the timing is but one variable of the learning that you would not consider good, then it may not even be worth this chunking approach (Ozkan & Koseler, 2009).

g) Cost

If the e-learning scores brilliantly on all the above-noted criteria, what if it is too costly to purchase or maintain? There are many ways to examine the costs of running any training program, but the best way to think about it is to be consistent. Does your organization already calculate a cost-per-learner metric, or have some other way to determine the cost of running a learning program (online or not) on an annual basis? If not, you should. First, determine the costs of running an existing program by totaling all the costs of developing the course (instructional designer costs, time, travel costs, purchasing cost, and any annual fees for maintaining the course such as an LMS, conference center rental, or annual licenses). Then divide this number by the number of learners who have or will experience the course in the calendar year. Now you have your annual cost-per-learner metric (Ozkan & Koseler, 2009).
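As a minimal illustration of this arithmetic (the figures below are hypothetical and not drawn from the study), the calculation can be sketched in Python:

def cost_per_learner(development_costs, annual_fees, learners_per_year):
    # Annual cost-per-learner: total course costs divided by yearly learners.
    return (sum(development_costs) + sum(annual_fees)) / learners_per_year

# Hypothetical figures: design and travel costs plus an annual LMS license.
metric = cost_per_learner(
    development_costs=[12000, 1500],
    annual_fees=[3000],
    learners_per_year=250,
)
print(round(metric, 2))  # 66.0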

Once you calculate the cost per learner for existing programs, calculate it for the online program you are evaluating. You will probably have to estimate some of the figures in the formula (for example, how many learners will go through the program during the first year). Where does the online program fall within the distribution of all your programs? This gives you a good way to compare the potential program with existing ones based on operational costs. Any e-learning endeavor also has some non-fiscal benefits that could be considered part of its value, chiefly reusability. While upfront development costs (or purchase costs, if it is off-the-shelf) can be higher than those of creating instructor-led training, the return on investment increases as the learning is reused. Conversely, instructor-led training costs tend to remain the same or increase over time. So when discussing value, consider the cost and management annually, but also determine whether its reusability, the consistency of message, and other advantages of the e-learning are worth the investment by your organization (Ozkan & Koseler, 2009).
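To see why reuse favors e-learning, a simple break-even sketch helps (all numbers are hypothetical, not drawn from the source): with a high one-time development cost but a low cost per delivery, the reusable option overtakes instructor-led delivery after a fixed number of runs.

elearning_upfront = 20000.0  # one-time development cost (hypothetical)
elearning_per_run = 500.0    # hosting and support per delivery
ilt_per_run = 3000.0         # instructor, venue and travel per delivery

# Deliveries needed before the reusable e-learning option becomes cheaper.
break_even_runs = elearning_upfront / (ilt_per_run - elearning_per_run)
print(break_even_runs)  # 8.0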

h) Team effort

This is just one approach to evaluating the quality of an e-learning program: seven areas plus a look at a weighted average of importance. You may know another approach, or develop a different one for your organization. Regardless of the methodology you use, it is best to use a team-based approach to evaluation. Get a team together and compare notes using the same criteria: what were the top-scoring areas of the seven scales? Compare, contrast, and talk. Find out what your team thought of the learning and whether it is worth it to your learners. By taking a group approach you help to minimize rater bias and get a better holistic view of the impact and potential effectiveness of the e-learning for your organization. Aristotle said, "Quality is not an act, it is a habit." Instill and evaluate quality in your learning, whether it is delivered online or off (Ozkan & Koseler, 2009).
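A minimal sketch of the weighted-average scoring mentioned above (the area weights and ratings are invented placeholders, not a prescribed rubric):

# Hypothetical 1-5 ratings for the seven areas, with importance weights summing to 1.
scores  = {"design": 4, "interactivity": 3, "visual": 5, "language": 4,
           "technical": 3, "time": 4, "cost": 2}
weights = {"design": 0.25, "interactivity": 0.15, "visual": 0.10, "language": 0.10,
           "technical": 0.15, "time": 0.10, "cost": 0.15}

weighted_average = sum(scores[a] * weights[a] for a in scores)
print(round(weighted_average, 2))  # 3.5 on a 5-point scale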


2.8 Ways through which costs of evaluating e-learning facilities can be reduced

According to Zygouris-Coe et al. (2009), instituting a well-structured quality assurance process can be expensive and time-consuming, but in the long run it can be worth the effort. This is categorically supported by the study undertaken by Rajasingham (2009), which states that the merit, quality and success of the e-learning programme investigated were mainly due to the proper application of quality assurance strategies. Rajasingham (2009) further notes that new educational paradigms and models that challenge conventional assumptions and indicators of quality assurance are becoming possible with the help of the increasing sophistication of information technology. Training is always a necessity in every field, which makes it costly to develop and deliver, as it contributes a large part of the total cost of the business. Before we look at ways to cut training costs, we should first consider the different aspects of training costs. These include:

The tutor
Incentives to be used
Traveling cost
Cost of Training Environment
Material Developing Cost
Maintenance cost
Cost of evaluation

All these aspects cannot simply be scrapped with the intent of cutting training costs; instead, the options available are means which increase ease of learning, increase success rates, cut error rates, increase productivity, improve management of users, are easy to use and understand, and cut the cost of learning and training. The cost of evaluating e-learning facilities arises from the necessity of checking the means in order to deduce whether or not it conforms to required standards, and whether it would yield the desired results. The longer an evaluation is performed, the greater the cost that will be incurred (Rajasingham, 2009).

A well-designed evaluation can cost a great deal of money; just how much depends on the experience and education of the evaluator, the type of evaluation to be used, and the geographic location of the program. Tips that can be employed in order to minimize the cost of e-learning evaluation include:

a) To look for a qualified and inexpensive evaluator.


b) To look for an evaluator who may be able to get independent funding. The main problem with this method is that you may have to wait a long time for such funding.
c) It would also be advisable to make use of existing data.
d) To explore other avenues such as an evaluator who is interested in branching out

and trying new things. Sometimes an evaluator will work for less in order to have

an opportunity to do research on a new topic.


e) Look for an evaluator who has experience in evaluating programs like yours.

Again, this will save money because the evaluator is already familiar with

instruments, design issues, and other aspects of the study.


2.9 How an e-learning facility can be evaluated from a student perspective

Lapointe & Reisetter (2008) argue that the new reality of online learning demands a reassessment of our understanding of what makes for the most productive student engagement. The findings reported below are intended to help move towards an answer to this question. Successful use of online communication in courses has been reported by a number of researchers. Many of these courses had either been delivered online or had incorporated a blended approach as an additional means of learner support in delivering the online courses. However, there are variations in the reported benefits of e-learning. When evaluating e-learning facilities from a student's perspective, several factors need to be considered. Such factors include:

1. Student development (student study habits, workload, their overall impression of the

module)

2. Assessment (assessment task design, the level of feedback received)

3. Student perception of the learning materials (how well they facilitated learning, interest generated, difficulties encountered, overall presentation of learning materials)

4. Effectiveness of face-to-face contact (the organization, knowledge, facilitation skills

of lecturer)

Oliver (2000) argued that evaluation plays three vital roles:

a. Identification of the information needs of users


b. The usability of the web-based portal site
c. The selection of materials.

By making use of the reports, Ogunleye (2010) was able to deduce that students performed better in their respective courses when a system of e-learning was adopted, while Owens, Hardcastle and Richardson (2009) discovered that e-learning can provide psychological support and reduce the feeling of isolation and drop-out rates. Online discussion also encouraged introverts and students of non-western cultures, who are more reflective and tend not to respond so quickly in face-to-face discussion, to express their views (Ogunleye, 2010).

Concurrent studies by Hollenbark (1998) showed that learners have become more autonomous in e-learning, while MacDonald, Stodel, Farres, Breithaupt & Gabriel (2001) believed that learners are now more critical in their thinking and more effective in knowledge synthesis (Borns, 1999). Depending on their motivation, some learners may only participate in activities that they consider more fruitful. For example, Sluijsmans, Moerkerke, Van Merrienboer & Dochy (2001) reported that some learners actively sought ways to aid their performance on assignments; such learners may therefore only participate in online discussion if it is linked to their assessment. According to Clark (2001), linking online discussion to grades would ensure a high participation rate.

Students also tend to take more responsibility for their own learning when using e-learning than students in a traditional course. For this reason, evaluation should focus on students' learning behavior rather than on teaching behavior. Other student-based factors are also important in evaluating e-learning facilities in order to promote teaching and learning, such as the level of facility interaction and retention rates (Borns, 1999).

According to Willcoxson & Prosser (1996), learner characteristics may also reflect many demographic attributes such as readiness, learning styles and motivation to learn. Differences in learning styles result from such things as past life experiences and the demands of the present environment. Willcoxson & Prosser (1996) further identified four learning styles: the converger, the diverger, the assimilator and the accommodator.

A converger uses abstract conceptualization and active experimentation, while the diverger works best in the presence of concrete experiences. The assimilator creates models for the task at hand, while for the accommodator, learning is best conceived as a process. Birkey (1994) identified two of the learning styles, the accommodator and the converger, as very significant predictors for students choosing classes with high computer usage. This is because both of these learning styles have active experimentation as a common learning mode. On the other hand, Jonassen and Grabowski (1993) identified the two other learning styles, i.e. assimilators and divergers, as more thought-intensive, imaginative and intuitive, using sound logic as an approach to problem-solving. Divergers tend to be open-minded, and assimilators deal well with systematic and scientific approaches. The various learning styles mentioned play very important roles in a learner's ability to create web pages.

In modern times, everyone from educators and teachers to researchers and students is well informed of the potential of web technology, with many of them adopting it for the creation of new learning environments. This has consequently led to a large collection of educational websites. One basis for this is the belief that certain unique features of the technology (such as its powerful information manipulation tools and communication means) can substantially contribute to the teaching and learning process. For example, the information manipulation functions, such as generating, transmitting, storing, processing and retrieving information, are at the heart of educational transactions according to Mioduser, Bnachnifil, Lahav and Onan (2000).

The ease of use of the facility

Kirkpatrick's Four Levels of Evaluation can be used to assess student interaction with a facility from a student's perspective. The framework is divided into levels, namely reaction, learning, behaviour, results and, as an extension, return on investment.

1. Reaction

Evaluation at this level measures how the participants in a training program feel about their experience. It assesses several questions, such as whether they are satisfied with what they are learning, whether they regard the material as relevant to their work, and whether they believe the material will be useful to them. This level does not measure learning; it simply measures how well the learners liked the training session. Corporations are beginning to gather more data on how their trainees feel about the use of e-learning technologies. For example, the following results were obtained from an ASTD-Masie Center study involving the experiences of more than 700 e-learners (Kirschner & Paas, 2001):

Eighty-seven percent preferred to take digital courses during work hours.

Fifty-two percent preferred e-learning in a workplace office area.


Eighty-four percent would take a similar e-course if offered again.
Thirty-eight percent said they generally preferred e-learning to classroom training.


2. Learning

Kirkpatrick defined learning as the principles, facts, and techniques that are understood and absorbed by trainees. When trainers measure learning, they try to find out how much the skill, knowledge, or attitudes of their trainees have changed with respect to the content being taught. Measuring learning requires a more rigorous process than a reaction survey. Ideally, both a pretest and a posttest are given to trainees to determine how much they learned as a direct result of the training program. While many organizations do not measure at this level, other corporate training centers, such as Sun Corporation's Network Academy, keep careful track of what employees have learned through the use of both pretests and posttests (Alliger & Janak, 1989).
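A minimal sketch of how such a pretest/posttest comparison might be computed (the scores are invented for illustration; the cited organizations' actual procedures are not described at this level of detail):

from scipy import stats

pretest  = [52, 61, 48, 70, 55, 63, 58, 66]  # hypothetical scores before training
posttest = [68, 72, 60, 81, 64, 75, 70, 79]  # the same trainees after training

# A paired t-test asks whether the mean gain over the pretest differs from zero.
t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(t_stat, p_value)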

What do Research Studies Show About E-Learning?

A compilation by Jonassen and Grabowski (1993) alludes to the fact that the "No Significant Difference" phenomenon provides one of the most frequently quoted rationales for the power of e-learning. This body of research demonstrates that no significant difference can be found no matter what medium is used for learning. In many of these studies, the model is asynchronous learning delivered to the learner on demand. The findings demonstrate that even with no instructor or face-to-face interaction, there are no significant differences in the amount of content learned. A related website, supported by TeleEducation NB, New Brunswick, Canada, includes extracts from more than 355 research reports, summaries, and papers supporting the No Significant Difference phenomenon. This is one case in which a finding of no significant differences is actually a compelling factor in favor of e-learning. If corporations can get all of the advantages of e-learning with the same level of results as an instructor-led classroom situation, then the economic advantage of e-learning becomes even stronger.

Wegner, Holloway, and Garton (1999) provide an example of a study showing no

significant differences between the test scores of experimental (e-learning) and traditional

(classroom-based) students at Southwest Missouri State University. Although there were

no statistically significant differences in test scores, this two-semester study yielded

qualitative data that indicated that students in the e-learning group had, overall, more

positive feelings about their experience than did the control group. This observation is

consistent with those found in a number of the no significant difference studies.

However, it is becoming more common not to find the same level of results. While some studies show greater benefits in favor of face-to-face delivery, a growing body of research results demonstrates superior benefits of e-learning in general. In addition to higher performance results, there are other immediate benefits to students, such as increased time on task, higher levels of motivation, and reduced test anxiety for many learners. Wegner et al. (1999) report that, while the majority of the 49 studies they examined reported no significant difference between e-learning and traditional classroom education, nearly 30 percent of the studies reported that e-learning programs had positive outcomes based on student preference, improved grades, higher cost effectiveness, and a higher percentage of homework completion.

An alternative website to the No Significant Difference one, also supported by TeleEducation NB, features comparative studies that do show significant differences, most of which report positive results in favor of e-learning. For example, Wegner et al. (1999) evaluated a Web-based psychology course and reported that content knowledge, use of the WWW, and use of computers for academic purposes increased while computer anxiety decreased. Navarro and Shoemaker reported: "...we see that cyber learners performed significantly better than the traditional learners. Mean score [final exam] for the cyber learners was 11.3, while the mean score for traditional learners was 9.8. With a t-test statistic of 3.70, this result was statistically significant at the 99 percent level" (Thurmond & Wambach, 2004).
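The comparison Navarro and Shoemaker describe is an independent-samples t-test between two group means; a minimal sketch follows (the score lists are invented, and only the reported means and t-statistic come from the cited study):

from scipy import stats

cyber_scores       = [12, 11, 10, 13, 11, 12, 10, 11]  # hypothetical final-exam scores
traditional_scores = [10, 9, 10, 11, 9, 10, 9, 10]

# Independent-samples t-test: do the two group means differ significantly?
t_stat, p_value = stats.ttest_ind(cyber_scores, traditional_scores)
print(t_stat, p_value)  # significant at the 99 percent level if p < 0.01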

Along these same lines, a California State University Northridge study reported that e-learners performed 20 percent better than traditional learners (Strother, 2002). Strother (2002) also reported a significant difference between the mean grades earned by 406 university students in traditional and distance education classes, where the distance learners outperformed the traditional learners. In a study within the insurance industry, Redding and Rotzien (1999) reported that "the online group is the most successful at cognitive learning as measured by the end of course examinations... The results of the study do provide strong support for the conclusion that online instruction for individuals entering the insurance field is highly effective, and can be more effective than traditional classroom delivered instruction."

Similar results in support of e-learning came from Asynchronous Learning Networks (ALN) (2001), which reported a summary of empirical studies submitted to them. Of the 15 papers in which the effectiveness of ALN was compared to that of traditional classroom instruction, two-thirds reported e-learning to be more effective; the remainder reported no significant difference. Strother (2002) stressed the crucial need to develop critical thinking and other higher-order skills among students using e-learning products. Earlier, Bates noted that the potential for developing higher-order skills relevant to a knowledge-based society is a key driver in developing computer-based distance education courses. Examining how learners engage in higher-order thinking is the topic of a research study at Massey University in New Zealand (Strother, 2002). White (1998) examined the strategies of 420 foreign language learners at that university and reported that distance learners made greater use of metacognitive strategies (what individuals know about their own thinking) compared to classroom learners, most notably with regard to strategies of self-management and advance organization and, to a lesser extent, revision. In a study of the infusion of technology in education, Serrano and Alford (2000) conducted research that clearly showed that incorporating technology across the curriculum acts as a catalyst for all learners. They concluded that e-learning empowers students to engage actively in language-content learning tasks and to develop higher-order critical thinking, visualization, and literacy skills.

While developing critical thinking and other higher-order skills is undoubtedly a

desirable goal in a purely academic setting, it may be less important in the areas of

specialized job-related content delivery or skill-building associated with many types of

corporate online training programs. This is yet another evaluation issue that needs to be

addressed in this arena.

3. Behavior

Even well-informed, quantitative learning objectives do not typically indicate how the trainee will transfer that learning to job performance. Changed on-the-job behavior is certainly the main goal of most corporate training programs, but measuring this change is a more complex task than eliciting trainees' feelings or measuring their direct learning through test scores. In a number of studies included here, there is an assumed connection between measures of behavioral change and the hoped-for consequence, solid business results (Level IV), although in most cases empirical measurement is lacking. In their overview of the evaluation process, Bregman and Jacobson (2000) discuss the need to measure business results rather than just evaluate trainee test results. They point out that all important business results affect customer satisfaction, either directly or indirectly. Business results that may increase efficiency or help short-term profits but do not increase customer satisfaction are obviously bad for business. These authors claim that changes in customer satisfaction due to training of sales or service personnel are easy to measure by asking the customers of the trainees to compile reaction surveys. Generally, reaction sheets for customers get high response rates; therefore, a valid connection can be made between the effects of training on the employee and how the customer feels about that employee. Bregman and Jacobson summarize that a training program succeeds, by definition, when the training changes employees' behaviors in ways that matter to their customers.

Unilever claims that e-learning helped their sales staff produce more than US$20 million in additional sales (Bregman and Jacobson, 2000), a Level IV evaluation. They track the results of their e-training programs by asking course participants to take part in a teleconference several months after the course. Participants are asked to discuss how they have integrated their new skills into their work and to share their best practices, a Level III evaluation. Uniacke, the person in charge of Unilever's training program, points out that many results of e-training programs are difficult to measure. For example, he is convinced many employees do not learn new material, but rather polish their overall skills and customer interaction techniques, still a significant benefit to the company and its overall bottom line.

As a number of authors have pointed out, it seems that traditional trainers

incorporate the first three levels routinely in the design of training programs (Boverie,

Mulcahy, and Zondlo, 1994). In a more recent report on e-learning evaluation, Hall and

LeCavalier (2000) make a strong case for focusing on Level III with job performance-

based measures. Their research study of eleven U.S. and foreign companies helped them

identify best practices within these companies, which have significant e-learning success

stories. They conclude that the most promising strategy for many companies is to focus

on Level III to find out what is really effective within e-learning programs.

4. Results

This level of evaluation attempts to measure the results of training as it directly affects a company's bottom line, a challenging task for many reasons with respect to the concepts grasped by the trainees during the period of training. Kirkpatrick (1999) noted that the number of variables and complicating factors makes it difficult, if not impossible, to evaluate the direct impact of training on a business's bottom line, and this is just as true for e-learning as for traditional training programs. While reduced costs, higher quality, increased production, and lower rates of employee turnover and absenteeism are the desired results of training programs, most companies do not address this complex evaluation process. However, some companies strive to make the difficult link between training and improved business results. Some firms are beginning to measure e-learning results for their sales force in terms of increased sales, as in the case of Unilever. In a different approach to business results, Bassi's research (2001) demonstrates that investment in training adds to the value of a company's shares, a high priority for corporations, and she claims that there is added value regardless of overall market conditions.

5. Return on Investment

Using Phillips' Return on Investment calculation as an added level to Kirkpatrick's model requires a rather detailed and complex evaluation and calculation process. Using this level's evaluation data, the results are converted into monetary values and then compared with the cost of the training program to obtain a return on investment.
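Phillips' calculation is commonly expressed as net program benefits divided by program costs; a minimal sketch, with all figures hypothetical:

def roi_percent(monetary_benefits, program_costs):
    # Phillips-style ROI: net monetary benefits relative to program cost.
    return (monetary_benefits - program_costs) / program_costs * 100

# Hypothetical: training credited with 150,000 in benefits against 60,000 in costs.
print(roi_percent(150000, 60000))  # 150.0 percent return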

In respect of these developments, one can arrive at the conclusion that online course programs, and e-learning in particular, contribute intensely to collaborative and cooperative learning. E-learning also serves to enhance students' knowledge of course content by providing examples and applications in relevant literature and websites. However, one important conclusion from this study is that there is not a single right way for online course delivery. Although the development of e-learning is still in its infancy, the findings of this study provide the necessary guidance in designing instruction for e-learning students, as well as identifying certain constraints that could affect students' attitudes to e-learning, such as the availability of resources. The findings also show that much more still needs to be done to arouse interest in online course delivery. The implication of this is that evaluation in the context of e-learning must involve the learner, the resources available to students, and how to arouse their interest to trigger better results (Strother, 2002).


3.0 CHAPTER THREE: METHODOLOGY

3.1 Introduction

The methodology section critically evaluates the various research approaches that

were undertaken with specific regard to the selection of research type, research

philosophy, research tradition, sampling methods, research design, research instruments

used and data analysis methods that were employed. The research that was undertaken

employed the use of both quantitative and qualitative research designs in which the main

form of data collection was done through the use of questionnaires and interviews. The

choice of a mixed research design was guided by the fact that there are various

advantages associated with the use of a mixed research design. For instance, as postulated

by Ivankova, Creswell and Stick (2006), one of the advantages associated with the use of

a mixed research design is that it leads to higher levels of research objectivity, validity

and reliability, in the sense that the researcher is able to leverage the advantages

associated with both the qualitative and the quantitative research designs. Additionally, as

postulated by Johnson and Onwuegbuzie (2004), another advantage associated with the

use of mixed methods is that it enables effective data triangulation which leads to higher

levels of data credibility. Moreover, using mixed methods reduces research bias, because the findings do not rest on any single method's weaknesses. In essence, the mixed research

design was mainly used in order to complement the strengths of using a single design

while at the same time overcoming the weaknesses associated with the use of a single

design.


Moreover, the research employed simple random sampling in selecting the final research participants; participants were randomly picked from a sample population of 200 students. The research employed a one-sample t-test and the Spearman rho coefficient as the main statistical data analysis methods. On the other hand, interview responses were analyzed using thematic analysis, in which a frequency count of common themes was used to determine the percentage occurrence of a theme in the participants' responses.
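A minimal sketch of this kind of frequency count (the theme labels are invented placeholders, not the study's actual coding scheme):

from collections import Counter

# Hypothetical theme codes assigned to five interview transcripts.
coded_responses = [
    ["usability", "cost"],
    ["usability", "feedback"],
    ["cost"],
    ["usability"],
    ["feedback", "cost"],
]

counts = Counter(theme for response in coded_responses for theme in response)
total = len(coded_responses)
for theme, n in counts.items():
    # Percentage of respondents whose responses contained the theme.
    print(theme, round(100 * n / total))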

3.1 Research tradition

As postulated by Treviño and Weaver (1999), there are two main research traditions that a researcher can employ: the deductive and the inductive research traditions. The deductive research tradition is mainly focused on the development of hypotheses and a later acceptance or rejection of the formulated hypotheses based on the results obtained from the analysis undertaken. In the deductive approach, a researcher derives hypotheses from the research question and then develops a framework for rejecting or adopting the hypotheses based on the results obtained. For instance, Saunders, Lewis and Thornhill (2006) state that a researcher needs to develop both the alternate and the null hypothesis.

On the other hand, the inductive tradition is based on studying behaviour and offering a conclusion based on a theoretical framework (Saunders et al., 2006). Moreover, Babbie (2010) states that most inductive research traditions employ a qualitative design, in which the qualitative research design is considered one that emphasizes studying human behaviour from a social-phenomenon perspective. Conversely, Babbie (2010) states that most quantitative researches employ the deductive research tradition. The research that was undertaken employed both the inductive and the deductive research traditions, in the sense that it emphasized the use of both qualitative and quantitative research designs.

3.2 Research setting

A research setting involves various aspects, including the population sample that was employed in undertaking the research, the geographical niche in which the research was undertaken, and the application of the research findings obtained. The research that was undertaken was mainly focused on identifying the various aspects that should be considered in undertaking evaluation of eLearning facilities. Moreover, it was aimed at the development of an eLearning framework based on the constructs of eLearning evaluation facilities. Data was collected through the use of interviews and questionnaires administered to students in one of the prestigious higher learning institutions in Asia. The above analysis implies that the findings can only be applied in higher education institutions that employ the use of eLearning. Moreover, the geographical location is pinned to the area in which the research was undertaken, in this case Asia.

3.3 Research philosophy

As defined by Gliner and Morgan (2000, p. 17), a research philosophy is considered "a way of thinking about and conducting a research. It is not strictly a methodology, but more of a philosophy that guides how the research is to be conducted." Additionally, a research paradigm and philosophy comprises various factors such as an individual's mental model, way of seeing things, perceptions, and beliefs about reality. This concept influences the beliefs and values of researchers, so that they can provide valid arguments and terminology to give reliable results [1]. According to Collis and Hussey (2003), a research can employ two main research philosophies: the phenomenological research philosophy and the positivist research philosophy. Most qualitative research designs employ the phenomenological research philosophy, while most quantitative research designs employ the positivist research philosophy (Creswell, 1994).

As postulated by Bryman (2004), the positivist research paradigm is mainly focused on the development of hypotheses, followed by empirical data analysis in order to either adopt or reject the null hypothesis. For example, "like the resources researcher earlier, only phenomena that you can observe will lead to the production of credible data. To generate a research strategy to collect these data you are likely to use existing theory to develop hypotheses. These hypotheses will be tested and confirmed, in whole or part, or refuted, leading to the further development of theory which may then be tested by further research" [2]. Additionally, the positivist research philosophy emphasizes objectivity in the research undertaken. According to this paradigm, researchers are interested in collecting general information and data from a large social sample instead of focusing on the details of the research. According to this position, researchers' own beliefs have no value in influencing the research study. The positivist philosophical approach is mainly concerned with observations and experiments to collect numeric data [3].

[1] http://dissertationhelponline.blogspot.com/2011/06/research-philosophy-and-research.html
[2] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.102.4717&rep=rep1&type=pdf
[3] http://dissertationhelponline.blogspot.com/2011/06/research-philosophy-and-research.html

On the other hand, most qualitative research designs are based on the phenomenological research philosophy. For instance, according to Tarozzi and Luigina (2010, p. 19), "the object of phenomenological research is the participants' experience of phenomena, the way in which consciousnesses give meaning to their world in an inter-subjective dimension. Experience, where phenomenological social research is located, is the description of the phenomenon as it appears to the researcher's consciousness." The research that was undertaken was based on both the phenomenological research philosophy and the positivist research philosophy, because it employed both qualitative and quantitative research designs.

3.4 Research trustworthiness

Research trustworthiness can be generally defined as the level of accuracy in the results obtained from a research undertaking. However, obtaining high levels of research trustworthiness in qualitative research is considered a daunting task compared to obtaining them in quantitative research. The same sentiments were postulated by Shenton (2004, p. 63): "The trustworthiness of qualitative research generally is often questioned by positivists, perhaps because their concepts of validity and reliability cannot be addressed in the same way in naturalistic work." In order to promote higher levels of research trustworthiness, the following parameters were considered in undertaking the research: credibility, transferability, dependability and confirmability (Guba, 1981).

3.4.1 Credibility

Credibility can be defined as the level of truthfulness depicted in the undertaking of a research. A researcher can employ various strategies that enhance high levels of credibility in undertaking a research paper. Some of the strategies that can be used to enhance credibility include, but are not limited to: member checking, peer debriefing, prolonged engagement, triangulation, referential adequacy, persistent observation, and negative case analysis. The following strategies were employed in order to effectively promote high levels of credibility in the undertaking of the research: selecting the most appropriate data collection methods (questionnaires and interviews), effective familiarization with the research region, and use of random sampling in the process of selecting the participants for the research undertaking (Lodico et al., 2010).

3.4.2 Transferability

As defined by Denscombe (2010), transferability can be generally defined as the extent to which the results obtained can be applied to other similar research cases. For instance, in order to enhance higher levels of research transferability, a researcher can select a research population sample that reflects the various variables in the research questions. Another strategy for enhancing transferability is the use of thick description. For example, as postulated by Lincoln and Guba (1985), one strategy by which transferability can be enhanced is through the use of thick description. Lincoln and Guba (1985) describe thick description as a way of achieving a type of external validity: by describing a phenomenon in sufficient detail, one can begin to evaluate the extent to which the conclusions drawn are transferable to other times, settings, situations, and people. In order to enhance transferability, the research that was undertaken was based on collecting data from students who have used eLearning for more than 3 years. This is a population sample that sits well within the research variables, and the results obtained can be applied across many higher education institutes that utilize eLearning.

3.4.3 Dependability

Dependability can be generally defined as the level at which the same results can be replicated if a similar research were to be undertaken under similar research settings and characteristics. For example, "to check the dependability, one looks to see if the researcher has been careless or made mistakes in conceptualizing the study, collecting the data, interpreting the findings and reporting results. The logic used for selecting people and events to observe, interview, and include in the study should be clearly presented. The more consistent the researcher has been in this research process, the more dependable are the results" [4]. A good strategy that can be used to enhance dependability is the dependability audit, and there are several further strategies that can be deployed by a researcher in enhancing higher levels of dependability. For instance, "a major technique for assessing dependability is the dependability audit in which an independent auditor reviews the activities of the researcher (as recorded in an audit trail in field notes, archives, and reports) to see how well the techniques for meeting the credibility and transferability standards have been followed" [5]. According to Lincoln and Guba (2000), one effective strategy to check for dependability is the use of audit trails. According to Lincoln and Guba (1985), external audits involve having a researcher not involved in the research process examine both the process and the product of the research study. The purpose is to evaluate the accuracy of the findings and whether the interpretations and conclusions are supported by the data. Lincoln and Guba (1985) state that external audits provide a researcher with an opportunity to summarize preliminary findings, an opportunity to assess the adequacy of data and preliminary results, and important feedback that can lead to additional data gathering and the development of stronger and better-articulated findings. In order to enhance higher levels of dependability, the researcher employed the use of well-framed and validated data collection methods, namely interviews and questionnaires.

[4] http://qualitativeinquirydailylife.wordpress.com/chapter-5/chapter-5-dependability/
[5] [FN1]

3.4.4 Confirmability

Confirmability is mainly concerned with enhancing high levels of objectivity in the process of undertaking a research. According to Ghauri (2004), confirmability is to qualitative research what objectivity is to quantitative research: researchers need to demonstrate that their data and the interpretations drawn from them are rooted in circumstances and conditions outside the researcher's own imagination, and are coherent and logically assembled. Confirmability questions how the research findings are supported by the data collected. This is a process to establish whether the researcher has been biased during the study, owing to the assumption that qualitative research allows the researcher to bring a unique perspective to the study. An external researcher can judge whether this is the case by studying the data collected during the original inquiry [6]. Moreover, Denzin and Lincoln (1994, p. 513) state that "confirmability builds on audit trails... and involves the use of written field notes, memos, a field diary, process and personal notes, and a reflexive journal." The same analysis is provided by Lincoln & Guba (1985, p. 319), who state that one major strategy for enhancing higher levels of confirmability is the undertaking of audit trails. Audit trails are aimed at analyzing the various aspects of the research in order to determine how the conclusions were arrived at. For instance, as postulated by Lincoln and Guba (1985, p. 320), an audit trail draws on six classes of material: "(a) raw data (field notes, video and audio recordings), (b) data reduction and analysis products (quantitative summaries, condensed notes, working hypotheses), (c) data reconstruction and synthesis products (thematic categories, interpretations, inferences), (d) process notes (procedures and design strategies, trustworthiness notes), (e) materials related to intentions and dispositions (study proposal, field journal), and (f) instrument development information (pilot forms, survey format, schedules)." Confirmability was enhanced through undertaking audit trails in order to critically evaluate the process that was undertaken in arriving at the conclusions.

[6] http://credibility-rsmet.blogspot.com/2011/11/ensuring-credibility-of-qualitative.html

3.5 Research delimitation

In most cases, research delimitation is used to define the boundaries and the scope of the research undertaken. For example, according to Simon (2011, p. 2), "the delimitations are those characteristics that limit the scope and define the boundaries of your study. The delimitations are in your control. Delimiting factors include the choice of objectives, the research questions, variables of interest, theoretical perspectives that you adopted (as opposed to what could have been adopted), and the population you choose to investigate." The research sought to investigate the various parameters that should be considered in the evaluation of eLearning. In order to achieve this research aim, the researcher collected data from 150 students who had used eLearning systems for 3 years and over. This implies the research was mainly based on collecting data from eLearning students in Asia. According to Simon (2011, p. 2), "the delimitations section of your study will explicate the criteria of participants to enroll in your study, the geographic region covered in your study, and the profession or organizations involved."

The research delimitation was based on the following inclusion criterion:

Research participants: higher education institute students who have used the eLearning system for 3 years or more.


3.6 Participants (population and sample)

The target population for this study was 150 students enrolled in a higher education institute who had used eLearning systems for 3 years or more. Additionally, the sample included 5 students drawn from the same population for interviews. A simple random sampling technique was employed in selecting the final 150 participants from a total of 200; these participants were required to fill in a semi-structured questionnaire in order to evaluate their responses. After selecting the 150 research participants, simple random sampling was further undertaken on the remaining 50 participants in order to select 5 participants for interview sessions.

The use of simple random sampling was based on the fact that the technique gives every member of the population the same probability of being selected into the final sample. As defined by Moore & George (2006, p. 10), a simple random sample "of size n consists of n individuals from the population chosen in such a way that every set of n individuals has an equal chance to be the sample actually selected." Moreover, the choice was based on the fact that simple random sampling has various advantages, including but not limited to the following: it is a cost-effective and cheap method of sampling research participants, it is more effective when sampling a small number of participants, and it consumes less time (Moore & George, 2006). Additionally, simple random sampling was selected because the population used in undertaking the research was small and could be effectively sampled using this technique.
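A minimal sketch of this two-stage selection, assuming an anonymized roster of 200 student identifiers (the identifiers are placeholders):

import random

population = [f"student_{i:03d}" for i in range(1, 201)]  # 200 eligible students

# Stage 1: 150 questionnaire respondents, each member equally likely to be chosen.
questionnaire_sample = random.sample(population, 150)

# Stage 2: 5 interviewees drawn from the 50 students not already selected.
remaining = [s for s in population if s not in questionnaire_sample]
interview_sample = random.sample(remaining, 5)
print(len(questionnaire_sample), len(interview_sample))  # 150 5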


3.7 Instruments

The research employed both primary and secondary data collection methods. Primary data collection was undertaken through the use of questionnaires and interviews. On the other hand, secondary data sources such as printed materials, journal articles, and books were used to supplement the data collected using the primary methods.

3.7.1 Questionnaires

As postulated by CDC (2008, p. 1), "a questionnaire is a set of questions for gathering information from individuals. You can administer questionnaires by mail, telephone, using face-to-face interviews, as handouts, or electronically (i.e., by e-mail or through Web-based questionnaires)." CDC (2008, p. 1) continues to state that questionnaires are appropriate in cases where the research involves a large number of participants. Additionally, according to CDC (2008, p. 1), a questionnaire is suitable when the researcher needs to collect data about behaviours, beliefs, knowledge and attitudes, as well as when the researcher needs to protect the privacy of the participants.

The questionnaire was developed with the help of a panel of experts, comprising qualified members who have worked with eLearning systems for more than 10 years. The panel was drawn from 3 higher education institutes and provided guidelines on the development of a questionnaire that would enable the collection of data regarding evaluation of eLearning facilities. The questionnaire developed was based on Likert-scale-type responses and was pilot tested to ensure reliability and validity. The participants selected for the pilot study reflected the true nature of the final sample participants. This was achieved by applying the participant inclusion criterion of the study, namely participants drawn from a higher education institute who have used eLearning for 3 years or more.

The pilot study consisted of an analysis of a group of 20 students who had utilized eLearning systems for at least 3 years. The first part of the questionnaire covered the demographic characteristics of participants, while the second part included items regarding the various elements that should be considered in evaluating eLearning systems, as well as items relating to how various higher education institutes could reduce the costs of evaluating eLearning facilities. The survey questionnaire asked respondents to rate their level of agreement on a Likert-type scale with the following constructs: 1 denotes strongly disagree, 2 denotes disagree, 3 denotes neutral, 4 denotes agree, and 5 denotes strongly agree. The overall scores of participants on the questionnaire items were entered into SPSS version 20.0 in order to undertake a one-sample t-test as well as Spearman rho coefficient analysis.
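Equivalent analyses can be sketched outside SPSS; a minimal illustration with invented Likert ratings (not the study's data):

from scipy import stats

# Hypothetical 1-5 ratings per respondent for two questionnaire constructs.
evaluation_scores   = [4, 5, 3, 4, 4, 5, 3, 4, 5, 4]
satisfaction_scores = [4, 5, 3, 4, 5, 5, 3, 4, 4, 4]

# One-sample t-test: does the mean rating differ from the neutral midpoint of 3?
t_stat, p_value = stats.ttest_1samp(evaluation_scores, popmean=3)

# Spearman rho: rank correlation between evaluation and satisfaction ratings.
rho, rho_p = stats.spearmanr(evaluation_scores, satisfaction_scores)
print(t_stat, p_value, rho, rho_p)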

The participants in the pilot study were then asked to complete the questionnaire and to comment on its items. The questionnaire was then modified based on the comments of the participants in order to ensure clarity in the final questionnaire that was utilized in undertaking the study.

Justification for using questionnaires

The choice of questionnaires was based on a number of advantages associated with their use as data collection instruments. According to Belk (2006), some of these advantages include, but are not limited to, the following: questionnaires can be used to collect data from a large population; they make data analysis easy and fast; they are more objective due to their high level of standardization; and they reduce bias and are cost-effective (Kuiper & Clippinger, 2012).

However, despite these advantages, questionnaires also have a number of disadvantages. For instance, questionnaires are quite complex to design and develop, a factor attributed to the high level of standardization required. Additionally, respondents may tend to forget vital information, especially where the questionnaire is long and complicated. To counter these limitations, a short, simple and inclusive questionnaire was developed (Kuiper & Clippinger, 2012).

3.7.2 Interviews

To supplement the questionnaires, interviews were also used. Face-to-face interviews were employed in order to provide first-hand information on the interview questions and an insight into the various parameters that should be considered in evaluating eLearning facilities (Kuiper & Clippinger, 2012). Both the interviewer and the interviewee were able to clarify issues about the research being undertaken, which helped the interviewer obtain viable, authentic and well-elaborated information (Belk, 2006). There are various advantages associated with the use of interviews as a data collection instrument. For instance, interviews are a flexible data collection tool: where the interview questions were not well understood, the interviewer rephrased them to expound further. Interviews also allow one to learn about things and facts that cannot be observed directly, and they add internal viewpoints to outward behaviours (Kuiper & Clippinger, 2012).

Despite the advantages mentioned above, according to Kuiper & Clippinger (2012), interviews also have various disadvantages as data collection instruments. For instance, interviews are a slow method of collecting data, because the process calls for interviewing one person at a time, and they cannot fully trace events and trends that occurred in the past. Additionally, interviews are expensive to conduct and are subject to respondent and interviewer bias. These limitations were partly mitigated through a tight time and structural framework that ensured everything was done on time and appropriately.

Interview schedule

The following represents the interview schedule that was utilized in the undertaking of the study.

Interviewee Date
Student 1
Student 2
Student 3
Student 4
Student 5

Interview questions


The following are the questions that were put to the relevant stakeholders in the interviews undertaken for this research.

1. In your own opinion, do you think there is a positive correlation between evaluating eLearning facilities from a student perspective and the level of student satisfaction?

2. What other methods are institutions of higher education employing in undertaking

evaluation of eLearning facilities?

3. From your own personal perspective, what areas need to be focused on while developing an evaluation mechanism that can directly benefit the learning needs of the students?

4. What ways can be used to minimize the expense of the evaluation process but at the same time make it successful?

5. Based on your personal experience in utilizing eLearning facilities, are there any

models that have been developed for evaluating the e-learning facilities from a student

perspective?

3.7.3 Secondary data sources

Materials from the library, the internet and related research reports were used to provide the required data and information concerning the research question. Internal organization information sources were also analysed in order to obtain relevant data, while external data sources included information from various eLearning publications, previous research studies and academic institutions. Secondary data sources are instrumental in supporting the data collected in the primary data phase, i.e. from the interviews and questionnaires (Vithal & Jansen, 1997).


3.8 Questionnaire reliability and validity

Research validity and reliability are vital components of any research undertaking. Validity refers to the truthfulness of a research instrument, i.e. the extent to which it measures what it is intended to measure. Reliability, on the other hand, can be defined as the extent to which results are consistent over time; it also entails issues related to the accurate representation of the population sample employed in the research. As postulated by Litwin (1995), a research study is considered reliable if the same results can be replicated elsewhere using the same methodology.

It was quite challenging to determine the reliability and validity levels of this research, a factor attributed to the availability of numerous approaches to measuring validity and reliability. However, as postulated by Lincoln & Guba (2000), both quantitative and qualitative approaches are designed to understand and explain behaviour and events, their consequences, corollaries, components and antecedents; this means that components of both qualitative and quantitative approaches can be used together. In order to enhance the validity and reliability of the questionnaire, its design was based on established theories of questionnaire design. According to Litwin (1995), reliability and validity are important aspects to consider in the sense that a well-designed questionnaire should elicit accurate responses from the participants; however, developing a perfect questionnaire that can elicit perfect responses is a complex process fraught with disappointments. The researcher, with the help of a panel of experts, therefore developed a simple and inclusive semi-structured questionnaire. In designing it, the researcher and the panel of experts followed seven basic principles of questionnaire design (Bradburn, Sudman & Wansink, 2004). Precise terminology and simple language were used; jargon, ambiguity and unnecessary phrases were avoided; unwarranted assumptions and prejudice regarding the participants' responses were avoided; and conditional information was placed before the key points in the questions asked. The researcher and the panel of experts also avoided the use of double-barrelled questions, that is, questions that ask the participants more than one thing but allow only one answer. To avoid them, the researcher used single-statement items rated on the five-point Likert scale described above (1 denotes strongly disagree, 2 denotes disagree, 3 denotes neutral, 4 denotes agree, and 5 denotes strongly agree). Additionally, the researcher and the panel of experts chose an appropriate response format for participants to provide their responses. Finally, to further enhance validity and reliability, the researcher pilot tested the developed questionnaire with the aim of modifying it, and distributed it to other people with diversified backgrounds in eLearning to aid in reviewing it.

Also, reliability was enhanced through administering the same set of questions that were employed in the pilot study to the final research participants (Presser, Rothgeb, Couper, Lessler, Martin & Singer, 2004).
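
Although this research does not report a specific reliability statistic, the internal consistency of a pilot-tested Likert instrument is commonly quantified with Cronbach's alpha. The following is a minimal sketch, assuming the pilot responses are arranged with one row per participant and one column per item; the response data here is entirely hypothetical.

    # Hypothetical sketch: internal consistency of pilot Likert responses
    # via Cronbach's alpha (a standard check for this kind of instrument).
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: one row per respondent, one column per Likert item (1-5)."""
        k = items.shape[1]                         # number of items
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Made-up responses from 20 pilot participants on 6 items.
    rng = np.random.default_rng(0)
    pilot = rng.integers(1, 6, size=(20, 6))
    print(f"Cronbach's alpha = {cronbach_alpha(pilot):.3f}")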

3.9 Design

This research paper employed both quantitative and qualitative research designs in obtaining the responses of the research participants. Specifically, it employed an exploratory approach in examining the various parameters that should be considered in evaluating eLearning facilities. As postulated by Little (2013), research types include, but are not limited to, descriptive, exploratory, explanatory, comparative, evaluative and predictive research, and a researcher can effectively apply more than one research type in a single study. A descriptive research design is aimed at analysing the characteristics of the phenomenon being studied. Little (2013) further posits that an exploratory research design is well suited to analysing and studying a research phenomenon that is not clearly defined.

Exploratory research is also commonly used where there are few existing studies of the subject area. A comparative research design, on the other hand, is aimed at making comparisons between two scenarios being studied, while, as postulated by Adams (2007), an evaluative research design is aimed at analysing and assessing the outcomes of a research phenomenon. Adams (2007) further states that predictive research aims at predicting the outcome of a scenario based on some variables. This research paper employed both an exploratory and a predictive research design. The exploratory design was used in the sense that little research has been done in the recent past regarding the various parameters that should be considered in evaluating eLearning facilities. Additionally, a predictive design was used in that the research culminated in the development of an eLearning framework that could be used in undertaking effective evaluation of eLearning systems.

The research paper employed ordinal variables, in the sense that participants were required to rate their responses on a Likert-type scale. Research variables can be continuous, dichotomous, categorical or ordinal. Continuous variables are numerical outputs whose values can take on any number in a given range. Ordinal variables can take only a set number of values, such as a 1-5 Likert scale, and the order of those values has meaning. Categorical variables, such as race or gender, are variables where the output is not a number, or where the number used in the analysis does not align with a value of the variable (Little, 2013).
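
As an illustrative sketch (not part of the original analysis), this distinction can be made explicit in software: a 1-5 Likert response can be declared as an ordered categorical variable so that order comparisons remain meaningful, for example in pandas.

    # Illustrative sketch: representing 1-5 Likert responses as an ordinal
    # variable in pandas so the ordering of categories is preserved.
    import pandas as pd

    likert_levels = [1, 2, 3, 4, 5]   # strongly disagree ... strongly agree
    responses = pd.Series([4, 5, 3, 4, 2])

    ordinal = pd.Categorical(responses, categories=likert_levels, ordered=True)
    print(ordinal.min(), ordinal.max())  # order comparisons are meaningful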

3.10 Data Analysis

The collected data was analysed through the use of Spearman's rho coefficient and one-sample t-test analysis. Spearman's rho was utilized in order to determine the correlation between eLearning facility evaluation and students' satisfaction. The one-sample t-test analysis, on the other hand, was undertaken in order to analyse the mean variation and the statistical significance of the participants' responses. The adoption or rejection of the null hypothesis was based on the t and p values obtained, which were used to determine the mean variation and statistical significance respectively.
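
The study itself used SPSS version 20.0; the following is a minimal sketch of the same two analyses using SciPy, with hypothetical stand-in arrays for the 150 participants' Likert scores.

    # Minimal sketch of the two analyses described above, using SciPy in
    # place of SPSS. The arrays are hypothetical stand-ins for the data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    evaluation = rng.integers(1, 6, size=150)    # eLearning evaluation scores
    satisfaction = rng.integers(1, 6, size=150)  # student satisfaction scores

    # Spearman's rho: correlation between evaluation and satisfaction.
    rho, p_rho = stats.spearmanr(evaluation, satisfaction)
    print(f"Spearman's rho = {rho:.3f}, p = {p_rho:.3f}")

    # One-sample t-test against the fixed test value of 5 used in the study.
    t, p_t = stats.ttest_1samp(evaluation, popmean=5)
    print(f"t(149) = {t:.3f}, p = {p_t:.3f}")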


3.11 Ethical considerations

According to Pimple (2008), research ethics are important in the undertaking of any research that involves human subjects. In order to conform to research best practices, the following ethical considerations were observed in the process of undertaking the research:

Participation of the human subjects was on a voluntary basis and was conditional upon the participants signing a consent form. No monetary gains or tips were given to the participants, and no favours were advanced to any participant.

The participants were fully informed about the overall purpose, objectives and duration of the research. This ensured that the participants filled in the questionnaire appropriately and within the specified time frame.

The participants were also guaranteed data protection, in that participants' data was used solely for the purpose for which it was intended; no participant's data was used for any reason other than the research objectives.


4.0 CHAPTER FOUR: SUMMARY OF QUANTITATIVE RESULTS

Introduction

The research undertaken was aimed at evaluating the various ways through which eLearning facilities could be evaluated from a student's perspective, in order to develop an eLearning framework that could be used for effective evaluation of eLearning facilities. To achieve this aim, the following research questions were employed.

Research Questions

1. Is there a correlation between eLearning evaluation and students' satisfaction in using eLearning facilities?

2. In the absence of a uniform standard required for evaluating an e-learning facility, what methods could be used in higher educational institutions?

3. With increasing focus on the assistance to be provided to learners using e-learning facilities, what areas need to be focused on while developing an evaluation mechanism that can directly benefit the learning needs of the students?

4. What ways can be used to minimize the expense of the evaluation process but at the same time make it successful?

5. Are there models that have been developed for evaluating e-learning facilities and, if there are no standard models, is it possible to develop an evaluation mechanism that could be generalized?


Quantitative summaries

Research question 1: Is there a correlation between eLearning evaluation and students' satisfaction in using eLearning facilities?

Alternate Hypothesis: There is a positive correlation between eLearning evaluation and students' satisfaction in using eLearning facilities.

Null Hypothesis: There is NO positive correlation between eLearning evaluation and students' satisfaction in using eLearning facilities.

Correlations (Spearman's rho)
                                                    eLearning evaluation   Students' satisfaction
eLearning evaluation   Correlation Coefficient      1.000                  .180
                       Sig. (2-tailed)              .                      .081
                       N                            150                    150


Research question 2: In the absence of a uniform standard required for evaluating an e-learning facility, what methods could be used in higher educational institutions?

1. Student perception questionnaires

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.34   .622             .051

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -12.994    149   .000              -.660             (-.76, -.56)


2. Tools for measuring eLearning duration and frequency of log-in, pages accessed, and user profile

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   3.73   .988             .081

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -15.708    149   .000              -1.267            (-1.43, -1.11)


Research question 3: With increasing focus on the assistance to be provided to learners using e-learning facilities, what areas need to be focused on while developing an evaluation mechanism that can directly benefit the learning needs of the students?

1. Individual learner variables (learning history, physical characteristics, learner

attitudes, motivation levels of learners and familiarity with technology)

Learning history

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.27   .757             .062

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -11.868    149   .000              -.733             (-.86, -.61)

Physical characteristics

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.43   .727             .059

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -9.541     149   .000              -.567             (-.68, -.45)

Learner attitudes

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.57   .628             .051

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -8.446     149   .000              -.433             (-.53, -.33)

Motivational levels of learners

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.49   .632             .052

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -9.951     149   .000              -.513             (-.62, -.41)


Familiarity with technology

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.41   .626             .051

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -11.479    149   .000              -.587             (-.69, -.49)

2. Learning environment variables (the physical learning environment, the subject

environment, institutional or organizational environment).

The physical learning environment

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.53   .610             .050

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -9.508     149   .000              -.473             (-.57, -.37)


The subject environment

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.60   .579             .047

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -8.457     149   .000              -.400             (-.49, -.31)

Institutional or organizational environment

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.55   .586             .048

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -9.474     149   .000              -.453             (-.55, -.36)


3. Contextual variables (socio-economic factors, geographical location, cultural background, and the political context)

Socio-economic factors

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.48   .653             .053

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -9.760     149   .000              -.520             (-.63, -.41)

Geographical location

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.37   .747             .061

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -10.278    149   .000              -.627             (-.75, -.51)

Cultural background

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   3.47   .816             .067

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -23.000    149   .000              -1.533            (-1.67, -1.40)

Political factors


One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   3.41   .779             .064

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -24.952    149   .000              -1.587            (-1.71, -1.46)

4. Usability and technological factors (connectivity levels, mode of delivery,

interactivity levels, the multimedia used, presentation and application proactivity)

Connectivity levels

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.53   .587             .048

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -9.871     149   .000              -.473             (-.57, -.38)

Mode of delivery

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.57   .548             .045

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -9.541     149   .000              -.427             (-.52, -.34)

Interactivity levels

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.53   .552             .045

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -10.503    149   .000              -.473             (-.56, -.38)

Presentation

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.60   .543             .044

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -9.015     149   .000              -.400             (-.49, -.31)


Application proactivity

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.48   .552             .045

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -11.532    149   .000              -.520             (-.61, -.43)

Multimedia used

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.44   .549             .045

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -12.486    149   .000              -.560             (-.65, -.47)

5. Pedagogical variables (level of learner support systems, accessibility issues, level

of flexibility, assessment and evaluation, level of learner autonomy).

Level of learner support systems

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.43   .549             .045

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -12.652    149   .000              -.567             (-.66, -.48)

Accessibility issues


One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.51   .552             .045

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -10.789    149   .000              -.487             (-.58, -.40)


Level of flexibility

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.48   .552             .045

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -11.532    149   .000              -.520             (-.61, -.43)

Assessment and evaluation

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.49   .621             .051

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -10.123    149   .000              -.513             (-.61, -.41)


Level of learner autonomy

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.57   .549             .045

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -9.675     149   .000              -.433             (-.52, -.34)

6. Security variables (data privacy, integrity, availability and confidentiality)

Data privacy

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.65   .531             .043

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -8.000     149   .000              -.347             (-.43, -.26)


Data integrity

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.61   .541             .044

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -8.757     149   .000              -.387             (-.47, -.30)

Data availability

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.66   .529             .043

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -7.875     149   .000              -.340             (-.43, -.25)


Data confidentiality

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.63   .525             .043

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -8.707     149   .000              -.373             (-.46, -.29)

Research question 4: What ways can be used to minimize the expense of the evaluation process but at the same time make it successful?

Undertaking constant evaluation for improvement

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.51   .540             .044

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -11.183    149   .000              -.493             (-.58, -.41)

Incorporating the relevant stakeholders in the evaluation

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.45   .538             .044

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -12.438    149   .000              -.547             (-.63, -.46)

Development of effective evaluation objectives


One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.42   .534             .044

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -13.294    149   .000              -.580             (-.67, -.49)


Use of evaluation methods that covers all aspects of effective eLearning

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.45   .538             .044

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -12.438    149   .000              -.547             (-.63, -.46)

Undertaking effective eLearning planning and control process

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.49   .540             .044

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -11.639    149   .000              -.513             (-.60, -.43)


5.0 CHAPTER FIVE: DISCUSSION OF RESULTS

Quantitative summaries

Research question 1: Is there a correlation between eLearning evaluation and students' satisfaction in using eLearning facilities?

Alternate Hypothesis: There is a positive correlation between eLearning evaluation and students' satisfaction in using eLearning facilities.

Null Hypothesis: There is NO positive correlation between eLearning evaluation and students' satisfaction in using eLearning facilities.

Correlations (Spearman's rho)
                                                    eLearning evaluation   Students' satisfaction
eLearning evaluation   Correlation Coefficient      1.000                  .180
                       Sig. (2-tailed)              .                      .081
                       N                            150                    150

The above statistical table indicates that the r value obtained was 0.180, while the p value obtained was 0.081 (8.1%). The r value of 0.180 implies that there is a positive correlation between evaluation of eLearning and student satisfaction; the correlation is, however, modest, in the sense that the r value lies fairly close to zero. The p value of 0.081 means that, if the null hypothesis were true, there would be an 8.1% chance of random sampling producing a correlation at least this strong; equivalently, there is a 91.9% chance that such a result would not arise by chance alone. Although this does not meet the conventional 5% significance threshold, it is significant at the 10% level. On this basis, the null hypothesis was rejected and the alternate hypothesis adopted: there is a positive correlation between eLearning evaluation and students' satisfaction in using eLearning facilities.

Research question 2: In the absence of a uniform standard required for evaluating an e-learning facility, what methods could be used in higher educational institutions?

1. Student perception questionnaires

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.34   .622             .051

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -12.994    149   .000              -.660             (-.76, -.56)


From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 4.34. This rounds to 4.00 on the Likert scale, which denotes an Agree point. The value implies that most of the participants agreed that student perception questionnaires could be employed in evaluating eLearning facilities. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -12.994. Based on the mean value of the participants' responses, we adopt the hypothesis that student perception questionnaires can be used in effective evaluation of eLearning facilities. Moreover, the mean value (4.34 ± 0.622) is lower than the selected test value of 5, implying a statistically significant mean difference of 0.66 (95% confidence interval, 0.56 to 0.76), t(149) = -12.994, p = 0.00.
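
The reported figures can be checked directly from the definitions underlying the one-sample t-test: the t statistic is the mean difference divided by the standard error, and the 95% confidence interval is the mean difference plus or minus the critical t value times the standard error (about 1.976 for 149 degrees of freedom). A short worked check of the values above:

    # Worked check of the reported values for student perception questionnaires.
    mean, se, test_value = 4.34, 0.051, 5
    diff = mean - test_value                     # -0.66, the reported mean difference
    t = diff / se                                # about -12.9, matching -12.994
    ci = (diff - 1.976 * se, diff + 1.976 * se)  # about (-0.76, -0.56)
    print(t, ci)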


2. Tools for measuring eLearning duration and frequency of log-in, pages accessed, and user profile

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   3.73   .988             .081

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -15.708    149   .000              -1.267            (-1.43, -1.11)

From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 3.73. This rounds to 4.00 on the Likert scale, which denotes an Agree point. The value implies that most of the participants agreed that tools measuring eLearning duration, frequency of log-in, pages accessed and user profile could be employed in evaluating eLearning facilities. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -15.708. Based on the mean value of the participants' responses, we adopt the hypothesis that such tools could be employed in evaluating eLearning facilities. Moreover, the mean value (3.73 ± 0.988) is lower than the selected test value of 5, implying a statistically significant mean difference of 1.267 (95% confidence interval, 1.11 to 1.43), t(149) = -15.708, p = 0.00.

Research question 3: With increasing focus on the assistance to be provided to learners using e-learning facilities, what areas need to be focused on while developing an evaluation mechanism that can directly benefit the learning needs of the students?

1. Individual learner variables (learning history, physical characteristics, learner

attitudes, motivation levels of learners and familiarity with technology)

Learning history

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.27   .757             .062

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -11.868    149   .000              -.733             (-.86, -.61)

From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 4.27. This rounds to 4.00 on the Likert scale, which denotes an Agree point. The value implies that most of the participants agreed that individual learner variables (learning history) should be considered in eLearning evaluation. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -11.868. Based on the mean value of the participants' responses, we adopt the hypothesis that individual learner variables (learning history) should be considered in eLearning evaluation. Moreover, the mean value (4.27 ± 0.757) is lower than the selected test value of 5, implying a statistically significant mean difference of 0.733 (95% confidence interval, 0.61 to 0.86), t(149) = -11.868, p = 0.00.

Physical characteristics

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.43   .727             .059

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -9.541     149   .000              -.567             (-.68, -.45)

From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 4.43. This rounds to 4.00 on the Likert scale, which denotes an Agree point. The value implies that most of the participants agreed that individual learner variables (physical characteristics) should be considered in eLearning evaluation. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -9.541. Based on the mean value of the participants' responses, we adopt the hypothesis that individual learner variables (physical characteristics) should be considered in eLearning evaluation. Moreover, the mean value (4.43 ± 0.727) is lower than the selected test value of 5, implying a statistically significant mean difference of 0.567 (95% confidence interval, 0.45 to 0.68), t(149) = -9.541, p = 0.00.

Learner attitudes

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.57   .628             .051

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -8.446     149   .000              -.433             (-.53, -.33)

From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 4.57. This rounds to 5.00 on the Likert scale, which denotes a Strongly Agree point. The value implies that most of the participants strongly agreed that individual learner variables (learner attitudes) should be considered in eLearning evaluation. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -8.446. Based on the mean value of the participants' responses, we adopt the hypothesis that individual learner variables (learner attitudes) should be considered in eLearning evaluation. Moreover, the mean value (4.57 ± 0.628) is lower than the selected test value of 5, implying a statistically significant mean difference of 0.433 (95% confidence interval, 0.33 to 0.53), t(149) = -8.446, p = 0.00.

Motivational levels of learners

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.49   .632             .052

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -9.951     149   .000              -.513             (-.62, -.41)

From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 4.49. This rounds to 4.00 on the Likert scale, which denotes an Agree point. The value implies that most of the participants agreed that individual learner variables (motivational levels of learners) should be considered in eLearning evaluation. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -9.951. Based on the mean value of the participants' responses, we adopt the hypothesis that individual learner variables (motivational levels of learners) should be considered in eLearning evaluation. Moreover, the mean value (4.49 ± 0.632) is lower than the selected test value of 5, implying a statistically significant mean difference of 0.513 (95% confidence interval, 0.41 to 0.62), t(149) = -9.951, p = 0.00.

Familiarity with technology

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.41   .626             .051

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -11.479    149   .000              -.587             (-.69, -.49)

From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 4.41. This rounds to 4.00 on the Likert scale, which denotes an Agree point. The value implies that most of the participants agreed that individual learner variables (familiarity with technology) should be considered in eLearning evaluation. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -11.479. Based on the mean value of the participants' responses, we adopt the hypothesis that individual learner variables (familiarity with technology) should be considered in eLearning evaluation. Moreover, the mean value (4.41 ± 0.626) is lower than the selected test value of 5, implying a statistically significant mean difference of 0.587 (95% confidence interval, 0.49 to 0.69), t(149) = -11.479, p = 0.00.

2. Learning environment variables (the physical learning environment, the subject

environment, institutional or organizational environment).

The physical learning environment

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.53   .610             .050

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -9.508     149   .000              -.473             (-.57, -.37)

From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 4.53. This rounds to 5.00 on the Likert scale, which denotes a Strongly Agree point. The value implies that most of the participants strongly agreed that learning environment variables (the physical learning environment) could be employed in evaluating eLearning facilities. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -9.508. Based on the mean value of the participants' responses, we adopt the hypothesis that learning environment variables (the physical learning environment) could be employed in evaluating eLearning facilities. Moreover, the mean value (4.53 ± 0.610) is lower than the selected test value of 5, implying a statistically significant mean difference of 0.473 (95% confidence interval, 0.37 to 0.57), t(149) = -9.508, p = 0.00.

The subject environment

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.60   .579             .047

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -8.457     149   .000              -.400             (-.49, -.31)

From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 4.60. This rounds to 5.00 on the Likert scale, which denotes a Strongly Agree point. The value implies that most of the participants strongly agreed that learning environment variables (the subject environment) could be employed in evaluating eLearning facilities. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -8.457. Based on the mean value of the participants' responses, we adopt the hypothesis that learning environment variables (the subject environment) could be employed in evaluating eLearning facilities. Moreover, the mean value (4.60 ± 0.579) is lower than the selected test value of 5, implying a statistically significant mean difference of 0.40 (95% confidence interval, 0.31 to 0.49), t(149) = -8.457, p = 0.00.

Institutional or organizational environment

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.55   .586             .048

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -9.474     149   .000              -.453             (-.55, -.36)

From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 4.55. This rounds to 5.00 on the Likert scale, which denotes a Strongly Agree point. The value implies that most of the participants strongly agreed that learning environment variables (institutional or organizational environment) could be employed in evaluating eLearning facilities. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -9.474. Based on the mean value of the participants' responses, we adopt the hypothesis that learning environment variables (institutional or organizational environment) could be employed in evaluating eLearning facilities. Moreover, the mean value (4.55 ± 0.586) is lower than the selected test value of 5, implying a statistically significant mean difference of 0.453 (95% confidence interval, 0.36 to 0.55), t(149) = -9.474, p = 0.00.

3. Contextual variables (socio-economic factors, geographical location, cultural background, and the political context)

Socio-economic factors

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.48   .653             .053

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -9.760     149   .000              -.520             (-.63, -.41)

From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 4.48. This rounds to 4.00 on the Likert scale, which denotes an Agree point. The value implies that most of the participants agreed that contextual variables (socio-economic factors) could be employed in evaluating eLearning facilities. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -9.760. Based on the mean value of the participants' responses, we adopt the hypothesis that contextual variables (socio-economic factors) could be employed in evaluating eLearning facilities. Moreover, the mean value (4.48 ± 0.653) is lower than the selected test value of 5, implying a statistically significant mean difference of 0.52 (95% confidence interval, 0.41 to 0.63), t(149) = -9.760, p = 0.00.

Geographical location

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.37   .747             .061

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -10.278    149   .000              -.627             (-.75, -.51)

From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 4.37. This rounds to 4.00 on the Likert scale, which denotes an Agree point. The value implies that most of the participants agreed that contextual variables (geographical location) could be employed in evaluating eLearning facilities. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -10.278. Based on the mean value of the participants' responses, we adopt the hypothesis that contextual variables (geographical location) could be employed in evaluating eLearning facilities. Moreover, the mean value (4.37 ± 0.747) is lower than the selected test value of 5, implying a statistically significant mean difference of 0.627 (95% confidence interval, 0.51 to 0.75), t(149) = -10.278, p = 0.00.


Cultural background

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   3.47   .816             .067

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -23.000    149   .000              -1.533            (-1.67, -1.40)

From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 3.47. This rounds to 3.00 on the Likert scale, which denotes a Neutral point. The value implies that most of the participants held a neutral opinion on whether contextual variables (students' cultural background) could be employed in evaluating eLearning facilities. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -23.000. Based on the mean value of the participants' responses, we can neither accept nor reject the hypothesis that contextual variables (students' cultural background) could be employed in evaluating eLearning facilities. Moreover, the mean value (3.47 ± 0.816) is lower than the selected test value of 5, implying a statistically significant mean difference of 1.533 (95% confidence interval, 1.40 to 1.67), t(149) = -23.000, p = 0.00.


Political factors

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   3.41   .779             .064

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -24.952    149   .000              -1.587            (-1.71, -1.46)

From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 3.41. This rounds to 3.00 on the Likert scale, which denotes a Neutral point. The value implies that most of the participants held a neutral opinion on whether contextual variables (political factors) could be employed in evaluating eLearning facilities. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -24.952. Based on the mean value of the participants' responses, we can neither accept nor reject the hypothesis that contextual variables (political factors) could be employed in evaluating eLearning facilities. Moreover, the mean value (3.41 ± 0.779) is lower than the selected test value of 5, implying a statistically significant mean difference of 1.587 (95% confidence interval, 1.46 to 1.71), t(149) = -24.952, p = 0.00.


4. Usability and technological factors (connectivity levels, mode of delivery,

interactivity levels, the multimedia used, presentation and application proactivity)

Connectivity levels

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.53   .587             .048

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -9.871     149   .000              -.473             (-.57, -.38)

From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 4.53. This rounds to 5.00 on the Likert scale, which denotes a Strongly Agree point. The value implies that most of the participants strongly agreed that usability and technological factors (connectivity levels) should be considered in eLearning evaluation. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -9.871. Based on the mean value of the participants' responses, we adopt the hypothesis that usability and technological factors (connectivity levels) should be considered in eLearning evaluation. Moreover, the mean value (4.53 ± 0.587) is lower than the selected test value of 5, implying a statistically significant mean difference of 0.473 (95% confidence interval, 0.38 to 0.57), t(149) = -9.871, p = 0.00.

Mode of delivery

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.57   .548             .045

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -9.541     149   .000              -.427             (-.52, -.34)

From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 4.57. This rounds to 5.00 on the Likert scale, which denotes a Strongly Agree point. The value implies that most of the participants strongly agreed that usability and technological factors (mode of delivery) should be considered in eLearning evaluation. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -9.541. Based on the mean value of the participants' responses, we adopt the hypothesis that usability and technological factors (mode of delivery) should be considered in eLearning evaluation. Moreover, the mean value (4.57 ± 0.548) is lower than the selected test value of 5, implying a statistically significant mean difference of 0.427 (95% confidence interval, 0.34 to 0.52), t(149) = -9.541, p = 0.00.

Interactivity levels

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.53   .552             .045

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -10.503    149   .000              -.473             (-.56, -.38)

From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 4.53. This rounds to 5.00 on the Likert scale, which denotes a Strongly Agree point. The value implies that most of the participants strongly agreed that usability and technological factors (level of interactivity) should be considered in eLearning evaluation. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -10.503. Based on the mean value of the participants' responses, we adopt the hypothesis that usability and technological factors (level of interactivity) should be considered in eLearning evaluation. Moreover, the mean value (4.53 ± 0.552) is lower than the selected test value of 5, implying a statistically significant mean difference of 0.473 (95% confidence interval, 0.38 to 0.56), t(149) = -10.503, p = 0.00.

Presentation

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.60   .543             .044

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -9.015     149   .000              -.400             (-.49, -.31)

From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 4.60. This rounds to 5.00 on the Likert scale, which denotes a Strongly Agree point. The value implies that most of the participants strongly agreed that usability and technological factors (presentation) should be considered in eLearning evaluation. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -9.015. Based on the mean value of the participants' responses, we adopt the hypothesis that usability and technological factors (presentation) should be considered in eLearning evaluation. Moreover, the mean value (4.60 ± 0.543) is lower than the selected test value of 5, implying a statistically significant mean difference of 0.400 (95% confidence interval, 0.31 to 0.49), t(149) = -9.015, p = 0.00.

Application proactivity

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.48   .552             .045

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -11.532    149   .000              -.520             (-.61, -.43)

From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 4.48. This rounds to 4.00 on the Likert scale, which denotes an Agree point. The value implies that most of the participants agreed that usability and technological factors (application proactivity) should be considered in eLearning evaluation. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -11.532. Based on the mean value of the participants' responses, we adopt the hypothesis that usability and technological factors (application proactivity) should be considered in eLearning evaluation. Moreover, the mean value (4.48 ± 0.552) is lower than the selected test value of 5, implying a statistically significant mean difference of 0.520 (95% confidence interval, 0.43 to 0.61), t(149) = -11.532, p = 0.00.

Multimedia used

One-Sample Statistics
        N     Mean   Std. Deviation   Std. Error Mean
RES     150   4.44   .549             .045

One-Sample Test (Test Value = 5)
        t          df    Sig. (2-tailed)   Mean Difference   95% CI of the Difference (Lower, Upper)
RES     -12.486    149   .000              -.560             (-.65, -.47)

From the above statistical analysis, the mean score of the participants' responses on the one-sample t-test was found to be 4.44. This rounds to 4.00 on the Likert scale, which denotes an Agree point. The value implies that most of the participants agreed that usability and technological factors (multimedia used) should be considered in eLearning evaluation. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -12.486. Based on the mean value of the participants' responses, we adopt the hypothesis that usability and technological factors (multimedia used) should be considered in eLearning evaluation. Moreover, the mean value (4.44 ± 0.549) is lower than the selected test value of 5, implying a statistically significant mean difference of 0.56 (95% confidence interval, 0.47 to 0.65), t(149) = -12.486, p = 0.00.

5. Pedagogical variables (level of learner support systems, accessibility issues, level

of flexibility, assessment and evaluation, level of learner autonomy).

Level of learner support systems

One-Sample Statistics

        N      Mean    Std. Deviation    Std. Error Mean
RES     150    4.43    .549              .045

One-Sample Test (Test Value = 5)

        t          df     Sig. (2-tailed)    Mean Difference    95% CI of the Difference
RES     -12.652    149    .000               -.567              (-.66, -.48)

From the above statistical analysis, the mean score of the participants' one-sample t-test was found to be 4.43. The mean score value of 4.43 rounds to 4.00 on the Likert scale, which denotes an Agree point. The value implies that most of the participants in the research agreed that pedagogical variables (level of learner support systems) should be used in eLearning evaluation. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -12.652. Based on the mean value of the participants' responses, we adopt the hypothesis that pedagogical variables (level of learner support systems) should be used in eLearning evaluation. Moreover, the statistical analysis indicates that the mean value (4.43 ± 0.549) is lower than the selected test value of 5. This implies a statistically significant difference of 0.567 (95% confidence interval, 0.48 to 0.66), t(149) = -12.652, p = 0.00.

Accessibility issues

One-Sample Statistics

        N      Mean    Std. Deviation    Std. Error Mean
RES     150    4.51    .552              .045

One-Sample Test (Test Value = 5)

        t          df     Sig. (2-tailed)    Mean Difference    95% CI of the Difference
RES     -10.789    149    .000               -.487              (-.58, -.40)

From the above statistical analysis, the mean score of the participants' one-sample t-test was found to be 4.51. The mean score value of 4.51 rounds to 5.00 on the Likert scale, which denotes a Strongly Agree point. The value implies that most of the participants in the research strongly agreed that pedagogical variables (level of eLearning accessibility) should be used in eLearning evaluation. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -10.789. Based on the mean value of the participants' responses, we adopt the hypothesis that pedagogical variables (level of eLearning accessibility) should be used in eLearning evaluation. Moreover, the statistical analysis indicates that the mean value (4.51 ± 0.552) is lower than the selected test value of 5. This implies a statistically significant difference of 0.487 (95% confidence interval, 0.40 to 0.58), t(149) = -10.789, p = 0.00.

Level of flexibility

One-Sample Statistics

        N      Mean    Std. Deviation    Std. Error Mean
RES     150    4.48    .552              .045

One-Sample Test (Test Value = 5)

        t          df     Sig. (2-tailed)    Mean Difference    95% CI of the Difference
RES     -11.532    149    .000               -.520              (-.61, -.43)

From the above statistical analysis, the mean score of the participants' one-sample t-test was found to be 4.48. The mean score value of 4.48 rounds to 4.00 on the Likert scale, which denotes an Agree point. The value implies that most of the participants in the research agreed that pedagogical variables (level of eLearning flexibility) should be used in eLearning evaluation. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -11.532. Based on the mean value of the participants' responses, we adopt the hypothesis that pedagogical variables (level of eLearning flexibility) should be used in eLearning evaluation. Moreover, the statistical analysis indicates that the mean value (4.48 ± 0.552) is lower than the selected test value of 5. This implies a statistically significant difference of 0.520 (95% confidence interval, 0.43 to 0.61), t(149) = -11.532, p = 0.00.

Assessment and evaluation

One-Sample Statistics

        N      Mean    Std. Deviation    Std. Error Mean
RES     150    4.49    .621              .051

One-Sample Test (Test Value = 5)

        t          df     Sig. (2-tailed)    Mean Difference    95% CI of the Difference
RES     -10.123    149    .000               -.513              (-.61, -.41)

From the above statistical analysis, the mean score of the participants' one-sample t-test was found to be 4.49. The mean score value of 4.49 rounds to 4.00 on the Likert scale, which denotes an Agree point. The value implies that most of the participants in the research agreed that pedagogical variables (assessment and evaluation) should be used in eLearning evaluation. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -10.123. Based on the mean value of the participants' responses, we adopt the hypothesis that pedagogical variables (assessment and evaluation) should be used in eLearning evaluation. Moreover, the statistical analysis indicates that the mean value (4.49 ± 0.621) is lower than the selected test value of 5. This implies a statistically significant difference of 0.513 (95% confidence interval, 0.41 to 0.61), t(149) = -10.123, p = 0.00.


Level of learner autonomy

One-Sample Statistics

        N      Mean    Std. Deviation    Std. Error Mean
RES     150    4.57    .549              .045

One-Sample Test (Test Value = 5)

        t         df     Sig. (2-tailed)    Mean Difference    95% CI of the Difference
RES     -9.675    149    .000               -.433              (-.52, -.34)

From the above statistical analysis, the mean score of the participants' one-sample t-test was found to be 4.57. The mean score value of 4.57 rounds to 5.00 on the Likert scale, which denotes a Strongly Agree point. The value implies that most of the participants in the research strongly agreed that the pedagogical variable (level of learner autonomy) should be used in eLearning evaluation. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -9.675. Based on the mean value of the participants' responses, we adopt the hypothesis that the pedagogical variable (level of learner autonomy) should be used in eLearning evaluation. Moreover, the statistical analysis indicates that the mean value (4.57 ± 0.549) is lower than the selected test value of 5. This implies a statistically significant difference of 0.433 (95% confidence interval, 0.34 to 0.52), t(149) = -9.675, p = 0.00.


6. Security variables (data privacy, integrity, availability and confidentiality)

Data privacy

One-Sample Statistics

        N      Mean    Std. Deviation    Std. Error Mean
RES     150    4.65    .531              .043

One-Sample Test (Test Value = 5)

        t         df     Sig. (2-tailed)    Mean Difference    95% CI of the Difference
RES     -8.000    149    .000               -.347              (-.43, -.26)

From the above statistical analysis, the mean score of the participants' one-sample t-test was found to be 4.65. The mean score value of 4.65 rounds to 5.00 on the Likert scale, which denotes a Strongly Agree point. The value implies that most of the participants in the research strongly agreed that security variables (data privacy) should be used in evaluating eLearning facilities. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -8.000. Based on the mean value of the participants' responses, we adopt the hypothesis that security variables (data privacy) should be used in evaluating eLearning facilities. Moreover, the statistical analysis indicates that the mean value (4.65 ± 0.531) is lower than the selected test value of 5. This implies a statistically significant difference of 0.347 (95% confidence interval, 0.26 to 0.43), t(149) = -8.000, p = 0.00.

Data integrity

One-Sample Statistics

        N      Mean    Std. Deviation    Std. Error Mean
RES     150    4.61    .541              .044

One-Sample Test (Test Value = 5)

        t         df     Sig. (2-tailed)    Mean Difference    95% CI of the Difference
RES     -8.757    149    .000               -.387              (-.47, -.30)

From the above statistical analysis, the mean score of the participants' one-sample t-test was found to be 4.61. The mean score value of 4.61 rounds to 5.00 on the Likert scale, which denotes a Strongly Agree point. The value implies that most of the participants in the research strongly agreed that security variables (data integrity) should be used in evaluating eLearning facilities. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -8.757. Based on the mean value of the participants' responses, we adopt the hypothesis that security variables (data integrity) should be used in evaluating eLearning facilities. Moreover, the statistical analysis indicates that the mean value (4.61 ± 0.541) is lower than the selected test value of 5. This implies a statistically significant difference of 0.387 (95% confidence interval, 0.30 to 0.47), t(149) = -8.757, p = 0.00.

Data availability

One-Sample Statistics

        N      Mean    Std. Deviation    Std. Error Mean
RES     150    4.66    .529              .043

One-Sample Test (Test Value = 5)

        t         df     Sig. (2-tailed)    Mean Difference    95% CI of the Difference
RES     -7.875    149    .000               -.340              (-.43, -.25)

From the above statistical analysis, the mean score of the participants' one-sample t-test was found to be 4.66. The mean score value of 4.66 rounds to 5.00 on the Likert scale, which denotes a Strongly Agree point. The value implies that most of the participants in the research strongly agreed that security variables (data availability) should be used in evaluating eLearning facilities. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -7.875. Based on the mean value of the participants' responses, we adopt the hypothesis that security variables (data availability) should be used in evaluating eLearning facilities. Moreover, the statistical analysis indicates that the mean value (4.66 ± 0.529) is lower than the selected test value of 5. This implies a statistically significant difference of 0.340 (95% confidence interval, 0.25 to 0.43), t(149) = -7.875, p = 0.00.

Data confidentiality

One-Sample Statistics

        N      Mean    Std. Deviation    Std. Error Mean
RES     150    4.63    .525              .043

One-Sample Test (Test Value = 5)

        t         df     Sig. (2-tailed)    Mean Difference    95% CI of the Difference
RES     -8.707    149    .000               -.373              (-.46, -.29)

From the above statistical analysis, the mean score of the participants' one-sample t-test was found to be 4.63. The mean score value of 4.63 rounds to 5.00 on the Likert scale, which denotes a Strongly Agree point. The value implies that most of the participants in the research strongly agreed that security variables (data confidentiality) should be used in evaluating eLearning facilities. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -8.707. Based on the mean value of the participants' responses, we adopt the hypothesis that security variables (data confidentiality) should be used in evaluating eLearning facilities. Moreover, the statistical analysis indicates that the mean value (4.63 ± 0.525) is lower than the selected test value of 5. This implies a statistically significant difference of 0.373 (95% confidence interval, 0.29 to 0.46), t(149) = -8.707, p = 0.00.

Research question 4: What ways can be used to minimize the expense on the evaluation

process but at the same time make it successful?

Undertaking constant evaluation for improvement

One-Sample Statistics

        N      Mean    Std. Deviation    Std. Error Mean
RES     150    4.51    .540              .044

One-Sample Test (Test Value = 5)

        t          df     Sig. (2-tailed)    Mean Difference    95% CI of the Difference
RES     -11.183    149    .000               -.493              (-.58, -.41)

From the above statistical analysis, the mean score of the participants' one-sample t-test was found to be 4.51. The mean score value of 4.51 rounds to 5.00 on the Likert scale, which denotes a Strongly Agree point. The value implies that most of the participants in the research strongly agreed that undertaking constant evaluation for eLearning improvement can significantly reduce the expense incurred in evaluating eLearning facilities. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -11.183. Based on the mean value of the participants' responses, we adopt the hypothesis that undertaking constant evaluation for eLearning improvement can significantly reduce the expense incurred in evaluating eLearning facilities. Moreover, the statistical analysis indicates that the mean value (4.51 ± 0.540) is lower than the selected test value of 5. This implies a statistically significant difference of 0.493 (95% confidence interval, 0.41 to 0.58), t(149) = -11.183, p = 0.00.

Incorporating the relevant stakeholders in the evaluation

One-Sample Statistics

        N      Mean    Std. Deviation    Std. Error Mean
RES     150    4.45    .538              .044

One-Sample Test (Test Value = 5)

        t          df     Sig. (2-tailed)    Mean Difference    95% CI of the Difference
RES     -12.438    149    .000               -.547              (-.63, -.46)

From the above statistical analysis, the mean score of the participants' one-sample t-test was found to be 4.45. The mean score value of 4.45 rounds to 4.00 on the Likert scale, which denotes an Agree point. The value implies that most of the participants in the research agreed that incorporating the relevant stakeholders in the evaluation can significantly reduce the expense incurred in evaluating eLearning facilities. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -12.438. Based on the mean value of the participants' responses, we adopt the hypothesis that incorporating the relevant stakeholders in the evaluation can significantly reduce the expense incurred in evaluating eLearning facilities. Moreover, the statistical analysis indicates that the mean value (4.45 ± 0.538) is lower than the selected test value of 5. This implies a statistically significant difference of 0.547 (95% confidence interval, 0.46 to 0.63), t(149) = -12.438, p = 0.00.

Development of effective evaluation objectives

One-Sample Statistics

        N      Mean    Std. Deviation    Std. Error Mean
RES     150    4.42    .534              .044

One-Sample Test (Test Value = 5)

        t          df     Sig. (2-tailed)    Mean Difference    95% CI of the Difference
RES     -13.294    149    .000               -.580              (-.67, -.49)

From the above statistical analysis, the mean score of the participants' one-sample t-test was found to be 4.42. The mean score value of 4.42 rounds to 4.00 on the Likert scale, which denotes an Agree point. The value implies that most of the participants in the research agreed that development of effective eLearning evaluation objectives can significantly reduce the expense incurred in evaluating eLearning facilities. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -13.294. Based on the mean value of the participants' responses, we adopt the hypothesis that development of effective eLearning evaluation objectives can significantly reduce the expense incurred in evaluating eLearning facilities. Moreover, the statistical analysis indicates that the mean value (4.42 ± 0.534) is lower than the selected test value of 5. This implies a statistically significant difference of 0.580 (95% confidence interval, 0.49 to 0.67), t(149) = -13.294, p = 0.00.

Use of evaluation methods that cover all aspects of effective eLearning

One-Sample Statistics

        N      Mean    Std. Deviation    Std. Error Mean
RES     150    4.45    .538              .044

One-Sample Test (Test Value = 5)

        t          df     Sig. (2-tailed)    Mean Difference    95% CI of the Difference
RES     -12.438    149    .000               -.547              (-.63, -.46)

From the above statistical analysis, the mean score of the participants' one-sample t-test was found to be 4.45. The mean score value of 4.45 rounds to 4.00 on the Likert scale, which denotes an Agree point. The value implies that most of the participants in the research agreed that the use of evaluation methods that cover all aspects of effective eLearning can significantly reduce the expense incurred in evaluating eLearning facilities. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -12.438. Based on the mean value of the participants' responses, we adopt the hypothesis that the use of evaluation methods that cover all aspects of effective eLearning can significantly reduce the expense incurred in evaluating eLearning facilities. Moreover, the statistical analysis indicates that the mean value (4.45 ± 0.538) is lower than the selected test value of 5. This implies a statistically significant difference of 0.547 (95% confidence interval, 0.46 to 0.63), t(149) = -12.438, p = 0.00.

Undertaking effective eLearning planning and control process

One-Sample Statistics

        N      Mean    Std. Deviation    Std. Error Mean
RES     150    4.49    .540              .044

One-Sample Test (Test Value = 5)

        t          df     Sig. (2-tailed)    Mean Difference    95% CI of the Difference
RES     -11.639    149    .000               -.513              (-.60, -.43)

From the above statistical analysis, the mean score of the participants' one-sample t-test was found to be 4.49. The mean score value of 4.49 rounds to 4.00 on the Likert scale, which denotes an Agree point. The value implies that most of the participants in the research agreed that undertaking an effective eLearning planning and control process significantly reduces the costs incurred in eLearning evaluation. Additionally, from the sample statistics, the p value was obtained as 0.00 and the t value as -11.639. Based on the mean value of the participants' responses, we adopt the hypothesis that undertaking an effective eLearning planning and control process significantly reduces the costs incurred in eLearning evaluation. Moreover, the statistical analysis indicates that the mean value (4.49 ± 0.540) is lower than the selected test value of 5. This implies a statistically significant difference of 0.513 (95% confidence interval, 0.43 to 0.60), t(149) = -11.639, p = 0.00.

Interview Questions Thematic Analysis

As earlier stated, the research also employed interviews in collecting primary data from the research participants. A total of 5 interviews were undertaken in order to determine the students' responses regarding the evaluation of eLearning facilities from a student perspective. The following were the interview questions that were asked and the students' responses to each.

Student one:


1. In your own opinion do you think there is a positive correlation between evaluating

eLearning facilities from a student perspective and the level of student satisfaction?

Yes, there is a positive correlation between evaluation of eLearning facilities from a student perspective and the level of students' satisfaction.

2. What other methods are institutions of higher education employing in undertaking

evaluation of eLearning facilities?

I don't know the exact term to use, for I am not an IT guru, but I have heard that some eLearning systems have capabilities to record and store user data which can be used for evaluating eLearning facilities.

3. From your own personal perspective, what areas need to be focused on while developing the evaluation mechanism so that it can directly benefit the learning needs of the students?

There are so many areas that need to be focused on in order to develop an effective eLearning evaluation system that meets the students' needs. For instance, there is need to focus on areas such as the level of system interactivity, the ease of use of the eLearning facility and the motivation level of students to use the facility. Additionally, another factor that should be considered in the development of eLearning evaluation facilities is the level of security in the eLearning facility.

4. What ways can be used to minimize the expense on the evaluation process but at the

same time make it successful?

I have no idea on that one.


5. Based on your personal experience in utilizing eLearning facilities, are there any

models that have been developed for evaluating the e-learning facilities from a student

perspective?

NO

Student two:

1. In your own opinion do you think there is a positive correlation between evaluating

eLearning facilities from a student perspective and the level of student satisfaction?

Evaluating eLearning facilities from a student perspective leads to high levels of students' satisfaction in utilizing the eLearning facility.

2. What other methods are institutions of higher education employing in undertaking

evaluation of eLearning facilities?

Most institutions use online surveys after an eLearning session, or printed surveys, to determine students' perceptions regarding eLearning systems.

3. From your own personal perspective, what areas need to be focused on while developing the evaluation mechanism so that it can directly benefit the learning needs of the students?

Factors that should be considered in eLearning evaluations are: the multimedia icons used, the security level in the facility, the learner ability and disability levels, the learning environment in which the facility is being utilized, how flexible the eLearning facility is, and the cost of evaluation relative to the benefits.

4. What ways can be used to minimize the expense on the evaluation process but at the

same time make it successful?


Effectively planning for constant evaluation of eLearning in order to identify

problems and fix them.

5. Based on your personal experience in utilizing eLearning facilities, are there any

models that have been developed for evaluating the e-learning facilities from a student

perspective?

NO

Student three:

1. In your own opinion do you think there is a positive correlation between evaluating

eLearning facilities from a student perspective and the level of student satisfaction?

No, I don't think there is a positive correlation between students' satisfaction levels and eLearning facility evaluation.

2. What other methods are institutions of higher education employing in undertaking

evaluation of eLearning facilities?

Use of student surveys

3. From your own personal perspective, what areas need to be focused on while developing the evaluation mechanism so that it can directly benefit the learning needs of the students?

Students' learning abilities, how easy it is to use the eLearning system, the availability of support activities, the environment and the subject in which the eLearning is delivered, the level of students' data security and the extent to which learners can interact with the system.

4. What ways can be used to minimize the expense on the evaluation process but at the

same time make it successful?


Constant reviews and fixing of eLearning system problems

5. Based on your personal experience in utilizing eLearning facilities, are there any

models that have been developed for evaluating the e-learning facilities from a student

perspective?

NO

Student four:

1. In your own opinion do you think there is a positive correlation between evaluating

eLearning facilities from a student perspective and the level of student satisfaction?

I can categorically state that there is a correlation in the sense that undertaking

the evaluation of eLearning facilities will eventually lead to students being

satisfied with the eLearning facility.

2. What other methods are institutions of higher education employing in undertaking

evaluation of eLearning facilities?

Embedded data analytics systems in the eLearning system

3. From your own personal perspective, what areas need to be focused on while developing the evaluation mechanism so that it can directly benefit the learning needs of the students?

Some of the areas that need to be considered are those related to usability of the system, learner abilities, security issues, diversity of learner backgrounds, whether learners are motivated to embrace technology, how learning content will be delivered, and the learning environment in which the eLearning will be utilized as well as the subjects being taught.


4. What ways can be used to minimize the expense on the evaluation process but at the

same time make it successful?

Undertaking monitoring and evaluation of eLearning systems

5. Based on your personal experience in utilizing eLearning facilities, are there any

models that have been developed for evaluating the e-learning facilities from a student

perspective?

NO

Student five:

1. In your own opinion do you think there is a positive correlation between evaluating

eLearning facilities from a student perspective and the level of student satisfaction?

Yes, there is. Look at it in these terms: if students are experiencing difficulties in the use of the eLearning facility, then undertaking an evaluation from the student perspective will provide an understanding of the various problems faced by students, so that those problems can later be fixed, which will eventually lead to higher levels of students' satisfaction.

2. What methods are institutions of higher education employing in undertaking evaluation

of eLearning facilities?

Personally, I would think most higher education institutions employ surveys that ask the students to describe their experience with the eLearning system.

3. From your own personal perspective, what areas need to be focused on while developing the evaluation mechanism so that it can directly benefit the learning needs of the students?


How the students will use the facility, whether the facility will be quite easy to

use, the multimedia content that will be embedded in the system, interactivity

tools to be used, the level of eLearning security, and the subject being taught as

well as the environment in which the eLearning is taking place.

4. What ways can be used to minimize the expense on the evaluation process but at the

same time make it successful?

Invest heavily in high technology eLearning systems.

5. Based on your personal experience in utilizing eLearning facilities, are there any models that have been developed for evaluating the e-learning facilities from a student perspective? Please reply with a Yes or a No.

NO

Interview thematic analysis

Question 1

Table indicating frequency appearance of common themes

Common Theme                          Frequency count
There is a positive correlation       4
There is no positive correlation      1
Total                                 5

A pie chart indicating participants' responses

[Pie chart: positive correlation, 80%; no positive correlation, 20%]

From the above pie chart, it is evident that 80% of the research participants indicated that there is a positive correlation between eLearning evaluation and students' satisfaction, in the sense that eLearning evaluation leads to higher levels of students' satisfaction. On the other hand, 20% of the respondents indicated that there is no positive correlation between eLearning evaluation and students' satisfaction, since in their view eLearning facility evaluation does not lead to higher levels of students' satisfaction. The above interview results are in line with the questionnaire responses, in which most of the participants indicated that there is a positive correlation between eLearning facility evaluation and students' satisfaction.
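The frequency counts and percentages in this thematic analysis follow mechanically from tallying the coded responses. A minimal sketch of that tally (Python standard library only; the theme labels mirror the table above and are illustrative):

```python
from collections import Counter

# Question 1 responses from the five interviews, coded into the two themes above.
responses = ["positive correlation", "positive correlation",
             "no positive correlation", "positive correlation",
             "positive correlation"]

counts = Counter(responses)       # frequency count per common theme
total = sum(counts.values())      # 5 interviews in total
for theme, count in counts.most_common():
    print(f"{theme}: {count} ({100 * count / total:.0f}%)")
# positive correlation: 4 (80%)
# no positive correlation: 1 (20%)
```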

Question 2

Table indicating frequency appearance of common themes

Common Theme                                   Frequency count
Use of students' surveys                       3
Use of eLearning data embedded technologies    2
Total                                          5


A pie chart indicating participants' responses

[Pie chart: students' surveys, 60%; eLearning data embedded systems, 40%]

From the above chart representation, 60% of the interviewees indicated that most higher education institutes employ the use of students' surveys in undertaking eLearning facility evaluation. On the other hand, 40% of the interviewees indicated that most higher education institutes employ the use of eLearning data embedded systems in undertaking evaluation of eLearning facilities. The above results are consistent with the data obtained from the questionnaire analysis, in which most of the participants indicated that the use of students' surveys was the most common eLearning facility evaluation method employed by higher education institutes.

Question 3

Table indicating frequency appearance of common themes

Common Theme               Frequency count
System interactivity       3
Ease of use                4
Motivational levels        2
Security                   5
Multimedia used            2
Learner ability            3
Learner environment        4
Cost of evaluation         1
eLearning flexibility      2
Availability of support    1
Learner background         1
Mode of delivery           1
Total                      29


A pie chart indicating participants' responses

[Pie chart: security, 17%; ease of use, 14%; learner environment, 14%; system interactivity, 10%; learner ability, 10%; motivational levels, 7%; multimedia used, 7%; eLearning flexibility, 7%; cost of evaluation, 3%; support availability, 3%; learner background, 3%; mode of delivery, 3%]

From the above pie chart, 17% of the interview participants' responses indicated that security was the major parameter that should be considered in eLearning facility evaluation. Moreover, 14% indicated ease of use, 14% learner environment, 10% learner ability, 10% system interactivity, 7% motivational levels of learners, 7% multimedia used, 7% eLearning flexibility, 3% cost of evaluation, 3% availability of support, 3% learner background and 3% mode of delivery. The above research findings are consistent with the analysis that was undertaken through the use of participants' questionnaire responses.
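Because each interviewee could mention several areas, these percentages are taken over the 29 mentions rather than over the 5 interviewees, with each share rounded to the nearest whole percent. A short sketch (Python) of that normalization, using the counts from the frequency table above:

```python
# Normalize the question 3 frequency counts into whole-percent shares.
themes = {
    "security": 5, "ease of use": 4, "learner environment": 4,
    "system interactivity": 3, "learner ability": 3,
    "motivational levels": 2, "multimedia used": 2, "eLearning flexibility": 2,
    "cost of evaluation": 1, "availability of support": 1,
    "learner background": 1, "mode of delivery": 1,
}

total = sum(themes.values())  # 29 mentions across the five interviews
for theme, count in themes.items():
    print(f"{theme}: {round(100 * count / total)}%")
# e.g. security: 17%, ease of use: 14%, cost of evaluation: 3%
```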

Question 4

Common Theme                                        Frequency count
Undertake constant eLearning reviews,               3
monitoring and evaluation
Invest in state-of-the-art eLearning technology     1
Total                                               4


A pie chart indicating participants' responses

[Pie chart: constant eLearning reviews, evaluation and monitoring, 75%; invest in state-of-the-art technology, 25%]

From the above graphical representation, 75% of the interview respondents indicated that undertaking constant reviews, evaluation and monitoring of eLearning facilities was an effective way to minimize the costs incurred in eLearning evaluation. On the other hand, 25% of the participants stated that investing in state-of-the-art eLearning technologies was an effective strategy to minimize eLearning evaluation costs. The finding that constant reviews, evaluation and monitoring of eLearning facilities minimize evaluation costs is consistent with the questionnaire response analysis, in which the participants indicated the same.

Question 5

Common Theme       Frequency count
NO                 5
Total              5

From the above table, it is evident that all the participants that were interviewed indicated that they were not aware of any standard framework that can be used in undertaking evaluation of eLearning facilities from a student perspective. This implies that there is no widely adopted standard framework for evaluating eLearning facilities from a student perspective.

Development of an eLearning evaluation framework


Research question 5: Are there any models that have been developed for evaluating e-learning facilities and, if there are no standard models, is it possible to develop an evaluation mechanism that could be generalized?

In order to develop an effective eLearning evaluation framework, the various components that should be considered in eLearning evaluation were benchmarked against a rounded value of 4.00, which denotes an Agree point on the Likert scale. The value of 4.00 was selected because it denotes Agree, which implies that most participants agreed to the point in question.


Based on the benchmark value of 4.00, the following table summarizes the eLearning evaluation parameters that were included in, and excluded from, the development of the eLearning framework.

eLearning category       eLearning sub-category                   Students' response (mean)   Included in framework
Individual learner       Learning history                         4.27                        YES
variables                Physical characteristics                 4.43                        YES
                         Learner attitudes                        4.57                        YES
                         Motivational level                       4.49                        YES
                         Familiarity with technology              4.41                        YES
Learning environment     Physical learning environment            4.53                        YES
variables                Subject environment                      4.60                        YES
                         Institutional/organizational factors     4.55                        YES
Contextual variables     Socio-economic factors                   4.48                        YES
                         Geographical factors                     4.37                        YES
                         Cultural background                      3.47                        YES
                         Political factors                        3.41                        NO
Usability factors        Connectivity                             4.53                        YES
                         Interactivity                            4.53                        YES
                         Mode of delivery                         4.57                        YES
                         Presentation                             4.60                        YES
                         Application proactivity                  4.48                        YES
                         Multimedia used                          4.44                        YES
Pedagogical factors      Level of learner support                 4.43                        YES
                         Accessibility level                      4.51                        YES
                         Flexibility                              4.48                        YES
                         Assessment and evaluation                4.49                        YES
                         Learner autonomy                         4.57                        YES
Security variables       Data privacy                             4.65                        YES
                         Data integrity                           4.61                        YES
                         Data availability                        4.66                        YES
                         Data confidentiality                     4.63                        YES
Proposed eLearning facilities evaluation framework

[Figure: E-LEARNING FACILITIES EVALUATION, fed by six groups of variables:
- Individual learner variables (learning history, physical characteristics, learner attitudes, motivational levels and familiarity with technology)
- Learning environment variables (the physical learning environment, the subject environment, institutional or organizational environment)
- Contextual variables (socio-economic factors, geographical location, cultural background)
- Usability and technological factors (connectivity levels, mode of delivery, interactivity levels, the multimedia used, presentation and application proactivity)
- Pedagogical variables (level of learner support systems, accessibility issues, level of flexibility, assessment and evaluation, level of learner autonomy)
- Security variables (data privacy, integrity, availability and confidentiality)]
6.0 Conclusion and Recommendations

The research that was undertaken was aimed at evaluating the various constructs and parameters that should be considered in undertaking eLearning facility evaluation. Moreover, the research was mainly focused on determining the various ways through which eLearning evaluation can be undertaken in order to reduce the costs associated with evaluating eLearning facilities. From the research that was undertaken, results indicated that there is a strong positive correlation between eLearning evaluation and the level of students' satisfaction: undertaking eLearning evaluation significantly leads to higher levels of students' satisfaction.

Additionally, the results of the study indicate that the factors that should be considered in undertaking eLearning evaluation include the following: individual learner variables (learning history, physical characteristics, learner attitudes, motivation levels of learners and familiarity with technology); learning environment variables (the physical learning environment, the subject environment, institutional or organizational environment); contextual variables (socio-economic factors, geographical location, cultural background, and the political context); usability and technological factors (connectivity levels, mode of delivery, interactivity levels, the multimedia used, presentation and application proactivity); pedagogical variables (level of learner support systems, accessibility issues, level of flexibility, assessment and evaluation, level of learner autonomy); and security variables (data privacy, integrity, availability and confidentiality). Additionally, the results of the study indicate that most higher education institutes employ the use of students' surveys and inbuilt data analytics tools that measure user profile information and usage. Also, the results of the study indicate that the ways through which higher education institutes can minimize the costs incurred in undertaking eLearning evaluation include the following: undertaking effective eLearning planning and control processes, use of evaluation methods that cover all aspects of effective eLearning, development of effective evaluation objectives, incorporating the relevant stakeholders in the evaluation, and undertaking constant evaluation for improvement.

Recommendations

The research that was undertaken was based on data collection from a total of 155 students (150 questionnaire respondents and 5 interview respondents). This represents a small population sample, bearing in mind the large number of students who are utilizing eLearning systems. This is a major limitation of the study, and there is a need for future research that is wider in scope and covers a larger student population, both across the country and globally. Secondly, the developed framework does not exhaustively cover all the components needed to undertake effective eLearning facility evaluation. There is a likelihood of certain variables changing due to the dynamics being experienced in the information and communication technology industry. This implies that the framework should be modified over time to reflect the changes being experienced in information and communication technology.
