
Total Quality Management

Vol. 20, No. 5, May 2009, 523–535






Students' perceptions of service quality in higher education

Halil Nadiri (a), Jay Kandampully (b)* and Kashif Hussain (c)

(a) Department of Business Administration, Eastern Mediterranean University, PO Box 95, Gazimağusa, North Cyprus, Via Mersin-10, Turkey; (b) Department of Consumer Sciences, The Ohio State University, USA; (c) Institute of Educational Sciences, Near East University, Lefkoşa, North Cyprus, Via Mersin-10, Turkey

*Corresponding author. Email: kandampully.1@osu.edu

ISSN 1478-3363 print / ISSN 1478-3371 online
© 2009 Taylor & Francis
DOI: 10.1080/14783360902863713
http://www.informaworld.com

This study aims to diagnose the applicability of the perceived service quality measurement scale to students and to diagnose the student satisfaction level in higher education. It attempts to diagnose the perceived service quality of administrative units, such as the services provided by the registrar, library, faculty/school offices, rector's office, dormitories, sports and health centre. Descriptive and causal analyses are employed, and the reliability and dimensionality of the scale are tested. Results indicate that the perceived service quality measurement instrument is two-dimensional for higher education services: tangibles and intangibles. The results and implications are discussed in detail.

Keywords: perceived service quality; student satisfaction; higher education



Introduction
Higher education is a fast-growing service industry that is increasingly exposed to globalisation processes (Damme, 2001; O'Neil & Palmer, 2004). Service quality, emphasising student satisfaction, is a newly emerging field of concern. In order to attract students, serve their needs and retain them, higher education providers are actively involved in understanding students' expectations and perceptions of service quality. They often need to adapt techniques for measuring the quality of their services, just as in the business sector. Most conceptual frameworks for measuring service quality are based on marketing concepts (Gummesson, 1991). These frameworks measure quality through customer perceptions (Grönroos, 1984), with customer expectations having a substantial influence on these perceptions. It is argued that only criteria defined by customers count in measuring quality (Zeithaml et al., 1990).
According to Hennig-Thurau et al. (2001, p. 332), educational services fall into the field of services marketing. Owing to the unique characteristics of services, namely intangibility, heterogeneity, inseparability and perishability (Parasuraman, 1986), service quality cannot be measured objectively (Patterson & Johnson, 1993). In the services literature, the focus is on perceived quality, which results from the comparison of customers' service expectations with their perceptions of actual performance (Zeithaml et al., 1990).
During the last decade, quality initiatives have been the subject of an enormous amount of practitioner and academic discourse, and at various levels have found a gateway into higher education (Avdjieva & Wilson, 2002). Student satisfaction is often used to assess educational quality, where the ability to address strategic needs is of prime importance (Cheng, 1990). The conceptualisation of service quality, its relationship to the satisfaction and value constructs, and methods of evaluation have been a central theme of the education sector in recent years (Oldfield & Baron, 2000; Soutar & McNeil, 1996). Measuring the quality of service in higher education is increasingly important (Abdullah, 2006).
In general, service quality promotes customer satisfaction, stimulates intention
to return, and encourages recommendations (Nadiri & Hussain, 2005). Customer
satisfaction increases profitability, market share and return on investment (Barsky &
Labagh, 1992; Fornell, 1992; Hackl & Westlund, 2000; Halstead & Page, 1992;
LeBlanc, 1992; Legoherel, 1998; Stevens et al., 1995). The higher education sector
should recognise the importance of service improvements in establishing a competitive
advantage.
The importance of quality in the service industry has attracted many researchers to empirically examine service quality within a wide array of service settings, such as appliance repair, banking, hotels, insurance and long-distance telephone services (Parasuraman et al., 1985; Zeithaml et al., 1990). Today, controversy continues concerning how service quality should be measured (Cronin & Taylor, 1992, 1994; Parasuraman et al., 1988, 1991, 1994). One of the most controversial issues is the reliability of SERVQUAL, a scale developed by Parasuraman et al. (1985) to measure service quality. SERVQUAL has been used to measure service quality in business schools (Carman, 1990), banking, dry cleaning, fast food services (Cronin & Taylor, 1992) and many other institutions.
Carman (1990) analysed the five dimensions of SERVQUAL by adding attributes pertinent to different situations, such as the fact that the failure rate is higher for colleges and universities than for either business or government organisations (Cameron & Tschirhart, 1992). In measuring service quality in higher education, it is important to study the meaning of service quality that relates to the situation under study. In the services literature, analyses providing a practical basis for service quality measurement have addressed the definitions of quality in higher education (Lagrosen et al., 2004), service quality dimensions (Joseph & Joseph, 1997; Lagrosen et al., 2004; Owlia & Aspinwall, 1996), perceived importance (Ford et al., 1999), and service quality and customer satisfaction (Rowley, 1997). The intention of this study is to provide a practical basis for service quality measurement in the higher education sector of the island of Cyprus, especially North Cyprus.
Harvey (2003, p. 4) notes that it is not always clear how views collected from students fit into institutional quality improvement policies and processes. Moreover, establishing the conditions under which student feedback can give rise to improvement is not an easy task. Indeed, Ford et al. (1993) have pointed out that SERVQUAL might assess students' perceptions of the quality of their educational institutions, but not the education itself. According to Oldfield and Baron (2000), student perceptions of service quality in higher education, particularly of the elements not directly involved with the content and delivery of course units, are researched using a performance-only adaptation of the SERVQUAL research instrument. Therefore, this study approaches the service quality of administrative units in general rather than academic issues, e.g. the services provided by the registrar, library, faculty/school offices, rector's office, dormitories, sports and health centre, rather than teaching, course content or curriculum. Service quality measurement in this study contributes to the overall quality of higher education institutes. Thus, the purpose of the study is twofold: to diagnose the applicability of the perceived service quality measurement scale to students and to diagnose the student satisfaction level in higher education.


Background of the study
If service quality is to be improved, it must be reliably assessed and measured. According to the SERVQUAL model (Parasuraman et al., 1988), service quality can be measured by identifying the gaps between customers' expectations of the service to be rendered and their perceptions of the actual performance of the service.
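
Expressed as a formula (a reading of the gap model described above rather than notation used by the authors), the perceived quality of an item is the difference between the perception score P and the expectation score E, and an overall index can be formed by averaging over the k scale items:

```latex
% Gap score for item i and an overall SERVQUAL-style index
% (illustrative notation only; k denotes the number of scale items).
Q_i = P_i - E_i, \qquad
SQ = \frac{1}{k} \sum_{i=1}^{k} \left( P_i - E_i \right)
```
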
Parasuraman et al. (1988, p. 15) defined service quality as 'a global judgment or attitude relating to the overall excellence or superiority of the service' and, applying Oliver's (1980) disconfirmation model, they conceptualised a customer's evaluation of overall service quality as the gap between expectations and perceptions of service performance levels (the gap model). Furthermore, they propose that overall service quality performance can be determined by a measurement scale called SERVQUAL that uses five generic dimensions:

(1) Tangibles – the physical surroundings represented by objects (for example, interior design) and subjects (for example, the appearance of employees);
(2) Reliability – the service provider's ability to provide accurate and dependable services;
(3) Responsiveness – a firm's willingness to assist its customers by providing fast and efficient service performance;
(4) Assurance – diverse features that provide confidence to customers (such as the firm's specific service knowledge, and polite and trustworthy behaviour from employees); and
(5) Empathy – the service firm's readiness to provide each customer with personal service.

The development of the gap model by Parasuraman et al. (1985) opened new horizons in the understanding of service quality. Moreover, the measurement of the gap between customers' expectations of service and their perceptions of the service received (gap 5) led to a frequently used and highly debated service quality instrument called the SERVQUAL scale. Parasuraman et al. (1985) argued that the most important gap is between customers' expectations of service and their perceptions of the service actually delivered (gap 5). The other four gaps (1, 2, 3 and 4) are the major causes of gap 5. Thus, firms should try to close or narrow the other four gaps first in order to manage gap 5.
The original SERVQUAL scale was composed of two sections. The first section contains 22 items capturing customer expectations of excellent firms in the specific service industry. The second contains 22 items measuring consumer perceptions of the service performance of the company being evaluated. The results from the two sections are then compared and used to determine the level of service quality. The SERVQUAL instrument has been widely used to measure service quality in various service industries. However, despite its popularity, it has received its share of criticism since its development. A considerable number of criticisms have focused on the use of expectations as a comparison standard (e.g. Cronin & Taylor, 1994; Teas, 1994).
According to Parasuraman et al. (1985, 1991), the concept of expectations is a key variable in the evaluation of service quality. However, Teas (1994) points out that some validity problems arise when customer expectations are used as a comparison standard. For example, expectations are dynamic in nature and may change according to customers' experiences and consumption situations. Boulding et al. (1993) reject the use of expectations as a comparison standard for the measurement of service quality and recommend performance-only measurement.
The negative empirical findings concerning the measurement of expectations led to
some doubt about its value. Some scholars maintain that measurement of expectations
does not provide unique information for estimating service quality; they argue that
performance-only assessment has already taken into account much of this information
(Babakus & Boller, 1992; Cronin & Taylor, 1992). In general, previous studies suggest that performance-only measurement is sufficient.
Thus, SERVQUAL has not been without criticism. In particular, research by Cronin and Taylor (1992) cast doubt on the validity of the disconfirmation paradigm advocated by Parasuraman et al. (1985, 1988). These authors questioned whether or not customers routinely assess service quality in terms of expectations and perceptions. They advance the notion that service quality is directly influenced only by perceptions of service performance. Accordingly, they developed a performance-only instrument (SERVPERF) that seems to produce better results than SERVQUAL (Asubonteng et al., 1996).
Another major criticism of the SERVQUAL scale reported in the literature concerns its dimensionality. Several researchers (Carman, 1990; Cronin & Taylor, 1992; Parasuraman et al., 1985, 1988, 1991; Teas, 1994) argue that the number of dimensions and the nature of the SERVQUAL construct may be industry-specific. The fit of the five SERVQUAL dimensions in different service activities has been an important question, and several studies have found that the dimensions proposed in SERVQUAL do not replicate. In many studies, the SERVQUAL scale has been found to be uni-dimensional (Angur et al., 1999; Babakus & Boller, 1992; Babakus & Mangold, 1992), sometimes two-dimensional (Ekinci et al., 2003; Karatepe & Avci, 2002) and sometimes to have as many as 10 dimensions (Carman, 1990). It has also been argued that a performance-only (SERVPERF) measure explains more of the variance in an overall measure of service quality than the SERVQUAL instrument (Cronin & Taylor, 1994).
Apart from the debate among the above researchers over the merits of SERVQUAL versus SERVPERF, it seems that, on balance, the emerging literature supports the performance-based paradigm over the disconfirmation-based paradigm (Cronin & Taylor, 1994). This research builds on these conclusions and adopts the performance-based SERVPERF paradigm.
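
As a concrete illustration of the two scoring approaches contrasted above, the sketch below scores a hypothetical 22-item battery both ways. The simulated data, array names and the use of NumPy are assumptions for illustration only; they are not part of the original study.

```python
import numpy as np

# Hypothetical 5-point Likert responses for a 22-item battery:
# rows are respondents, columns are items (1 = strongly disagree, 5 = strongly agree).
rng = np.random.default_rng(0)
n_respondents, n_items = 100, 22
expectations = rng.integers(3, 6, size=(n_respondents, n_items)).astype(float)
perceptions = rng.integers(2, 6, size=(n_respondents, n_items)).astype(float)

# SERVQUAL (disconfirmation paradigm): gap = perception - expectation,
# averaged over items to give one quality score per respondent.
servqual_scores = (perceptions - expectations).mean(axis=1)

# SERVPERF (performance-only paradigm): expectations are not collected at all.
servperf_scores = perceptions.mean(axis=1)

print("Mean SERVQUAL gap score:", round(servqual_scores.mean(), 2))
print("Mean SERVPERF score:", round(servperf_scores.mean(), 2))
```
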


Methodology
The sample of the study consists of students studying at Eastern Mediterranean University (EMU) in Famagusta, North Cyprus. Students were selected according to a non-probability convenience sampling method (Aaker et al., 1995). The management of the university was informed about the purpose of the study and, after permission was gained, 600 questionnaires were distributed to students. Of these, 522 questionnaires were returned. In all, 492 questionnaires were found to be usable, which represents an 82% response rate from the original sample of 600. The survey was conducted in April 2007.
The questionnaire was based on service perceptions only. There were 24 items in all: 22 items measuring perceptions of service quality (adapted from Parasuraman et al., 1991) and two items measuring student satisfaction. A pilot test was conducted with 50 students and, as a result, the instrument was reworded for measuring perceived service quality in higher education. A five-point Likert scale (Likert et al., 1934) was used for data collection, with 1 being 'strongly disagree' and 5 being 'strongly agree'. Research shows that self-reported performance may lead to response bias; however, a meta-analytic review by Churchill et al. (1985) demonstrates that self-report measures do not necessarily lead to response bias. In addition, the survey was prepared according to the back-translation method (McGorry, 2000).
SPSS 10.0 for Windows was employed to obtain the results required for the scale measurement. Descriptive analyses of means, standard deviations and frequencies were conducted. The reliability of the scale was tested; the dimensionality of the scale was examined through an exploratory factor analysis; correlation analysis was used to assess discriminant validity; and regression analysis produced the causal results.
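
The study reports its analyses from SPSS 10.0; the sketch below shows roughly equivalent steps in Python (Cronbach's alpha from the classical formula and a two-factor solution with varimax rotation). The simulated responses, column names and the scikit-learn estimator are assumptions for illustration only, and the rotation argument requires a recent scikit-learn release (0.24 or later).

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis  # rotation="varimax" needs scikit-learn >= 0.24

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Classical formula: alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical response matrix: one row per student, one column per scale item.
rng = np.random.default_rng(1)
responses = pd.DataFrame(
    rng.integers(1, 6, size=(492, 22)).astype(float),
    columns=[f"item_{i + 1}" for i in range(22)],
)

print("Overall Cronbach's alpha:", round(cronbach_alpha(responses), 2))

# Two-factor solution with varimax rotation, loosely mirroring the
# exploratory factor analysis reported in the Findings section.
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(responses)
loadings = pd.DataFrame(fa.components_.T, index=responses.columns,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2).head())
```
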


Findings
Dimensions of SERVPERF
The results of the exploratory factor analysis demonstrated that the SERVPERF instrument failed to form its five assumed dimensions (tangibles, reliability, responsiveness, assurance and empathy). The results formed only two dimensions: tangibles and intangibles. This is discussed further below.


The sample
The demographic breakdown of the sample in Table 1 shows that 56.1% of the respondents were male. As for the age distribution, the majority of respondents fall within the 21–25 age group (76.8%). With respect to the educational programme of study and faculties/schools of respondents, 82.3% were enrolled in undergraduate programmes, 35.4% were from the faculty of engineering and 40.9% were second-year students. About 25.6% of respondents had a CGPA (Cumulative Grade Point Average) of 2.5–2.99. In terms of nationality, 41.5% of respondents were Turkish Cypriots and 58.5% were from European, Asian and African countries (including Britain, Germany, Albania, Bosnia, Kosovo, Palestine, Turkey, Sudan, Jordan, Saudi Arabia, Oman, Pakistan, India, Bangladesh, Nigeria and Ghana).


Reliability and dimensionality of the scale
The results in Table 2 show that the overall reliability of the scale had an alpha coefficient of 0.96, which is deemed acceptable (Churchill, 1979; Nunnally, 1978). Exploratory factor analysis using varimax rotation was employed to explore the dimensionality of the dataset. The two factors, intangibles and tangibles, had a cumulative variance of 63.93%, and all the factor loadings were found to be greater than 0.50 (Hair et al., 1979), which demonstrates two distinct dimensions in the study. The Cronbach alphas for intangibles and tangibles were found to be 0.96 and 0.85 respectively at the aggregate level, which exceeds the minimum standard of 0.70 (Churchill, 1979; Nunnally, 1978). It was necessary to measure the reliability and dimensionality of the scale to avoid criticism.


Distribution of SERVPERF values of students
Table 3 demonstrates that students have relatively high perception scores (mean ≥ 4.20) for EMU's neat-appearing employees (4.20), safe transactions (4.27) and convenient operating hours (4.29). However, the lowest perception scores (mean ≤ 4.00) concerned EMU's modern looking equipment (3.98), materials associated with the service (3.99) and the personal attention given by employees (3.98), which means that EMU fails to maintain its modern looking equipment and the materials associated with the service; in addition, employees of EMU need to be well trained to provide at least a satisfactory level of service. Overall, the results reveal that students are happy (mean 4.16) and satisfied (mean 4.27) with the services provided by EMU.

Table 1. Demographic breakdown of the sample (n = 492).

                                                        Frequency (F)   Percentage (%)
Gender
  Female                                                      216             43.9
  Male                                                        276             56.1
  Total                                                       492            100.0
Age
  20 and below                                                 54             11.0
  21–25                                                       378             76.8
  26–30                                                        51             10.4
  31 and above                                                  9              1.8
  Total                                                       492            100.0
Programme of study
  Preparatory school                                           21              4.3
  Undergraduate                                               405             82.3
  Graduate (Master's/doctorate)                                66             13.4
  Total                                                       492            100.0
Faculty of study
  Faculty of architecture                                      24              4.9
  Faculty of arts and science                                   9              1.8
  Faculty of business and economics                            69             14.0
  Faculty of communication and media studies                   33              6.7
  Faculty of education                                         48              9.8
  Faculty of engineering                                      174             35.4
  Faculty of law                                               30              6.1
  School of foreign languages                                  21              4.3
  School of tourism and hospitality management                 57             11.6
  School of computing and technology                           27              5.5
  Total                                                       492            100.0
Educational year
  1st year                                                     72             14.6
  2nd year                                                    201             40.9
  3rd year                                                    132             26.8
  4th year                                                     87             17.7
  Total                                                       492            100.0
CGPA
  No credits earned                                            24              4.9
  1.99 or below                                                33              6.7
  2.0–2.49                                                    117             23.8
  2.5–2.99                                                    126             25.6
  3.0–3.49                                                    123             25.0
  3.5 or above                                                 69             14.0
  Total                                                       492            100.0
Nationality
  Turkish Cypriot                                             204             41.5
  Others (from European, Asian and African countries)         288             58.5
  Total                                                       492            100.0

Table 2. Results of exploratory factor analysis.

Intangibles: eigenvalue 12.59, % of variance 57.22, cumulative variance 57.22%, Cronbach alpha 0.96

  Item                                                                     Factor loading
  When you have a problem, EMU shows a sincere interest in solving it.          0.79
  Employees of EMU give you prompt service.                                     0.79
  EMU performs the service right the first time.                                0.78
  Employees of EMU are never too busy to respond to your requests.              0.78
  Employees of EMU tell you exactly when services will be performed.            0.77
  Employees of EMU are always willing to help you.                              0.77
  EMU insists on error-free records.                                            0.76
  EMU provides its services at the time it promises to do so.                   0.73
  You feel safe in your transactions with EMU.                                  0.73
  When EMU promises to do something by a certain time, it does so.              0.72
  Employees of EMU have the knowledge to answer your questions.                 0.72
  EMU has operating hours convenient to all its students.                       0.72
  EMU gives you individual attention.                                           0.70
  Employees of EMU are consistently courteous with you.                         0.70
  The behaviour of employees of EMU instils confidence in students.             0.67
  Employees of EMU understand your specific needs.                              0.67
  EMU has employees who give you personal attention.                            0.66
  EMU has your best interest at heart.                                          0.60

Tangibles: eigenvalue 1.48, % of variance 6.71, cumulative variance 63.93%, Cronbach alpha 0.85

  Item                                                                     Factor loading
  EMU's physical facilities are visually appealing.                             0.88
  EMU has modern looking equipment.                                             0.84
  EMU's employees are neat in appearance.                                       0.71
  Materials associated with the service are visually appealing at EMU.          0.65

Notes: Kaiser-Meyer-Olkin measure of sampling adequacy: 0.94. Bartlett's test of sphericity: 9191.87, p < 0.000. Principal component analysis with varimax rotation. Overall reliability score: 0.96.



Correlations of the study variables
In the present study correlation analysis was employed, since correlation analysis involves 'measuring the closeness of the relationship between two or more variables; it considers the joint variation of two measures' (Churchill, 1995, p. 887). In Table 4, the results of the correlation analysis are significant at the 0.01 level. When the correlation coefficient matrix between the constructs is examined, no correlation coefficient is equal to or above 0.90. This provides support for the discriminant validity of the study, which means that all the constructs are distinct (Amick & Walberg, 1975). It can be seen in Table 4 that all the construct means lie between 2.00 and 4.25, which indicates quite a low social desirability effect (response bias). This means that the respondents understood the questions well and did not simply mark the most favourable response.
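
A minimal sketch of the discriminant validity check described above: compute the construct-level correlation matrix and flag any pair of constructs whose correlation reaches 0.90. The construct scores below are randomly generated placeholders, not the study's data.

```python
import numpy as np
import pandas as pd

# Hypothetical construct scores (e.g. the mean of the items in each dimension),
# one row per respondent; in the study these come from the survey responses.
rng = np.random.default_rng(2)
constructs = pd.DataFrame(
    rng.uniform(1.0, 5.0, size=(492, 3)),
    columns=["tangibles", "intangibles", "satisfaction"],
)

corr = constructs.corr()  # Pearson correlation matrix, as in Table 4
print(corr.round(2))

# Discriminant validity heuristic used above: no inter-construct correlation
# should be equal to or above 0.90.
pairs = [(a, b) for i, a in enumerate(corr.columns) for b in corr.columns[i + 1:]
         if corr.loc[a, b] >= 0.90]
print("Pairs at or above 0.90:", pairs if pairs else "none")
```
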


Table 3. Distribution of SERVPERF values of students.

                                                                          Perceptions mean (SD)
Tangibles
  EMU has modern looking equipment.                                            3.98 (0.77)
  EMU's physical facilities are visually appealing.                            4.02 (0.81)
  EMU's employees are neat in appearance.                                      4.20 (0.87)
  Materials associated with the service are visually appealing at EMU.         3.99 (0.87)
Intangibles
  When EMU promises to do something by a certain time, it does so.             4.14 (0.84)
  When you have a problem, EMU shows a sincere interest in solving it.         4.19 (0.86)
  EMU performs the service right the first time.                               4.12 (0.88)
  EMU provides its services at the time it promises to do so.                  4.18 (0.82)
  EMU insists on error-free records.                                           4.17 (0.84)
  Employees of EMU tell you exactly when services will be performed.           4.16 (0.92)
  Employees of EMU give you prompt service.                                    4.16 (0.90)
  Employees of EMU are always willing to help you.                             4.16 (0.91)
  Employees of EMU are never too busy to respond to your requests.             4.11 (0.93)
  The behaviour of employees of EMU instils confidence in students.            4.12 (0.86)
  You feel safe in your transactions with EMU.                                 4.27 (0.79)
  Employees of EMU are consistently courteous with you.                        4.18 (0.87)
  Employees of EMU have the knowledge to answer your questions.                4.16 (0.82)
  EMU gives you individual attention.                                          4.18 (0.90)
  EMU has operating hours convenient to all its students.                      4.29 (0.82)
  EMU has employees who give you personal attention.                           3.98 (0.81)
  EMU has your best interest at heart.                                         4.10 (0.82)
  Employees of EMU understand your specific needs.                             4.13 (0.85)
Student satisfaction
  I am happy with the service quality of EMU.                                  4.16 (1.01)
  I am a satisfied student.                                                    4.27 (0.94)

Note: SD = standard deviation; all standard deviations are in parentheses.

Table 4. Correlations of the study variables.

Scale                   Tangibles   Intangibles   Student satisfaction
Tangibles                 1.00
Intangibles               0.73         1.00
Student satisfaction      0.65         0.79              1.00
Mean                      4.04         4.15              4.22
Standard deviation        0.69         0.66              0.93

Note: All correlations are significant at the 0.01 level (2-tailed).


Regression analysis
Regression analysis is the technique used to 'derive an equation that relates the criterion variable to one or more predictor variables; it considers the frequency distribution of the criterion variable when one or more predictor variables are held fixed at various levels' (Churchill, 1995, p. 887). Table 5 shows the regression analysis with student satisfaction as the dependent variable and tangibles and intangibles as the independent variables. Regression analysis was used to predict the student satisfaction level of EMU students, and the results showed a positive relationship, with an R² of 0.64 and an F value of 434.51 at a significance level of p < 0.001. Tangibles (β = 0.20) and intangibles (β = 0.97) both exert a significant positive effect on student satisfaction. Moreover, tangibles and intangibles jointly explain 64% of the variance (R²) in student satisfaction, which is very good. Overall, the results indicate that tangibles and intangibles are predictors of student satisfaction.

Table 5. Regression analysis.

Dependent variable: Student satisfaction
Independent variables: Tangibles and intangibles

Independent variable    β (standardised coefficient)    t-value    p
Tangibles                          0.20                   3.69    0.001
Intangibles                        0.97                  17.45    0.001
R² = 0.64
F = 434.51
p < 0.001

Note: All coefficients are significant at p < 0.05.
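
For readers who wish to reproduce the kind of regression reported in Table 5, the sketch below fits an ordinary least squares model with student satisfaction as the dependent variable and tangibles and intangibles as predictors. The simulated scores and the use of statsmodels are assumptions for illustration; the study itself used SPSS and reports standardised coefficients.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical construct scores standing in for the averaged tangibles,
# intangibles and satisfaction items of the 492 respondents.
rng = np.random.default_rng(3)
n = 492
data = pd.DataFrame({
    "tangibles": rng.normal(4.04, 0.69, n),
    "intangibles": rng.normal(4.15, 0.66, n),
})
satisfaction = (0.2 * data["tangibles"] + 0.9 * data["intangibles"]
                + rng.normal(0, 0.5, n))

# Ordinary least squares: satisfaction regressed on tangibles and intangibles.
X = sm.add_constant(data)
model = sm.OLS(satisfaction, X).fit()

# R-squared, F statistic, coefficients and p-values, analogous to Table 5.
# (Standardised beta coefficients, as reported in the paper, would require
# z-scoring all variables before fitting.)
print(model.summary())
```
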


Discussion and implications
In general, SERVQUAL is a measurement of service quality based on the difference between the customer's expectations of the quality of service he or she will receive and his or her perceptions of the service actually received. SERVPERF, in contrast, is a performance-only measurement of service quality (Zhou, 2004).
This study aimed to diagnose the applicability of the perceived service quality measurement scale to students and to determine the student satisfaction level in higher education. The findings reveal that the SERVPERF scale successfully maintains its reliability. Furthermore, students' evaluation of perceived service quality in higher education consists of two dimensions: tangibles and intangibles. These results are also supported by previous empirical studies in the literature (Ekinci et al., 2003; Karatepe & Avci, 2002).
This study attempted to diagnose the perceived service quality of administrative units such as the services provided by the registrar, library, faculty/school offices, rector's office, dormitories, sports and health centre. The findings of this study are important for practitioners in the higher education sector.


Management implications
The results of this study have a number of practical implications for the higher education
sector where authorities seek to identify the student satisfaction level in their particular
institutes:

(1) First, the findings of this study are important for higher education authorities, who should note that students are likely to become more demanding in terms of the level of service they consider to be satisfactory. It is obvious from the results that tangibles and intangibles are predictors of student satisfaction.
(2) Secondly, authorities should pay attention to the physical facilities of the university if they are to improve the quality of services in higher education. Students expect universities to have modern looking equipment and appealing materials associated with the service, such as brochures and pamphlets. Authorities should take into account the inanimate service environment so as to enhance perceived service quality and achieve student satisfaction.
(3) Finally, authorities should ensure that employees are well trained and understand the level of service that the university expects to provide for its students. Employees should be able to show adequate personal attention to students. Ensuring that employees are well trained, and giving attention to the other factors required for the provision of a high level of service quality, might incur increased costs but will improve student satisfaction. Thus, authorities are expected to allocate more financial resources to human resource practices; recruiting and selecting the most suitable candidates for vacant posts and training staff continuously will result in qualified personnel who are able to provide students with caring, individualised attention and convenient operating hours. The allocation of financial resources to human resource practices will equip employees with a better understanding of service excellence.


Limitations and avenues for future research
This research has certain limitations, and interpretation of its findings therefore needs to be
undertaken with caution:

(1) First, the sample in this study is small and limited to students studying at Eastern Mediterranean University. There are six universities in North Cyprus in total; other universities should also be included in the sample in further research on service quality in higher education.
(2) Second, many of the issues in the service quality literature remain to be explored: for example, how marketing strategies can be designed to manage perceived service quality, and how the higher education sector can use the service quality concept to formulate marketing strategies effectively.


Conclusion
This study also provides higher education service quality researchers with useful guidelines for future research that would result in more rigorous theoretical and methodological processes. The terms 'student satisfaction' and 'quality' have been central to higher education authorities' philosophy, and their importance continues with the promise of a renewed, foreseeable prosperity for the higher education of the future. Nevertheless, higher education research has not, on the whole, developed any substantive theories and innovations; partial responsibility for this lies in the method-driven research traditions of the past. Therefore, one of the apparent implications of this study, based on the SERVPERF scale, is that higher education authorities should improve their service level and update the structure of their available physical facilities. The use of the SERVPERF scale to measure service quality also provides diagnostic capability regarding the level of service performance from the students' perspective. Thus, the use of SERVPERF provides useful information to higher education authorities for developing quality improvement strategies. This study also supports the argument in the literature that a performance-only measure (SERVPERF) is the better predictor of service quality (Babakus & Boller, 1992; Boulding et al., 1993; Cronin & Taylor, 1992). In general, this study suggests that SERVPERF measurement is sufficient.



References
Aaker, D.A., Kumar, V., & Day, G.S. (1995). Marketing research (5th ed.). New York: John Wiley.
Abdullah, F. (2006). Measuring service quality in higher education: Three instruments compared. International Journal of Research & Method in Education, 29(1), 71–89.
Amick, D.J., & Walberg, H.J. (1975). Introductory multivariate analysis. Berkeley, CA: McCutchan.
Angur, M.G., Nataraajan, R., & Jahera, J.S., Jr. (1999). Service quality in the banking industry: An assessment in a developing economy. Journal of Services Marketing, 13(2), 132–150.
Asubonteng, P., McCleary, K.J., & Swan, J.E. (1996). SERVQUAL revisited: A critical review of service quality. Journal of Services Marketing, 10(6), 62–81.
Avdjieva, M., & Wilson, M. (2002). Exploring the development of quality in higher education. Managing Service Quality, 12(6), 372–383.
Babakus, E., & Boller, G.W. (1992). An empirical assessment of the SERVQUAL scale. Journal of Business Research, 24(3), 253–268.
Babakus, E., & Mangold, G.W. (1992). Adapting the SERVQUAL scale to hospital services: An empirical investigation. Health Services Research, 26(6), 767–786.
Barsky, J.D., & Labagh, R. (1992). A strategy for customer satisfaction. Cornell Hotel and Restaurant Administration Quarterly, 35(3), 32–40.
Boulding, W., Kalra, A., Staelin, R., & Zeithaml, V.A. (1993). A dynamic process model of service quality: From expectations to behavioral intentions. Journal of Marketing Research, 30(1), 7–27.
Cameron, K.S., & Tschirhart, M. (1992). Postindustrial environments and organizational effectiveness in colleges and universities. Journal of Higher Education, 63(1), 87–108.
Carman, J.M. (1990). Consumer perceptions of service quality: An assessment of the SERVQUAL dimensions. Journal of Retailing, 66(1), 33–35.
Cheng, Y.C. (1990). Conception of school effectiveness and models of school evaluation: A dynamic perspective. Education Journal, 18(1), 47–62.
Churchill, G.A. (1979). A paradigm for developing better measures of marketing constructs. Journal of Marketing Research, 16(1), 64–73.
Churchill, G.A. (1995). Marketing research: Methodological foundations (6th ed.). New York: Dryden Press.
Churchill, G.A., Jr., Ford, N.M., Hartley, S.W., & Walker, O.C., Jr. (1985). The determinants of salesperson performance: A meta-analysis. Journal of Marketing Research, 22(2), 103–118.
Cronin, J.J., & Taylor, S.A. (1992). Measuring service quality: A reexamination and extension. Journal of Marketing, 56(3), 55–68.
Cronin, J.J., & Taylor, S.A. (1994). SERVPERF versus SERVQUAL: Reconciling performance-based and perceptions-minus-expectations measurement of service quality. Journal of Marketing, 58(1), 125–131.
Damme, D. (2001). Quality issues in the internationalization of higher education. Higher Education, 41(4), 415–441.
Ekinci, Y., Prokopaki, P., & Cobanoglu, C. (2003). Service quality in Cretan accommodations: Marketing strategies for the UK holiday market. International Journal of Hospitality Management, 22(1), 47–66.
Ford, J.B., Joseph, M., & Joseph, B. (1993). Service quality in higher education: A comparison of universities in the United States and New Zealand using SERVQUAL. In Enhancing knowledge development in marketing: Proceedings of the American Marketing Association Annual Summer Educators' Conference (pp. 75–81). Boston, MA: American Marketing Association.
Ford, J.B., Joseph, M., & Joseph, B. (1999). Importance-performance analysis as a strategic tool for service marketers: The case of service quality perceptions of business students in New Zealand and the USA. Journal of Services Marketing, 13(2), 171–186.
Fornell, C. (1992). A national customer satisfaction barometer: The Swedish experience. Journal of Marketing, 56(1), 6–21.
Grönroos, C. (1984). A service quality model and its marketing implications. European Journal of Marketing, 18(4), 36–44.
Gummesson, E. (1991). Truths and myths in service quality. Journal for Quality and Participation, 14(4), 28–33.
Hackl, P., & Westlund, A.H. (2000). On structural equation modelling for customer satisfaction measurement. Total Quality Management, 11(4/5/6), 820–825.
Hair, J.F., Anderson, R.E., Tatham, R.L., & Grablowsky, B.J. (1979). Multivariate data analysis with readings. Tulsa, OK: Petroleum Publishing Company.
Halstead, D., & Page, T.J., Jr. (1992). The effects of satisfaction and complaining behaviour on consumers' repurchase behaviour. Journal of Satisfaction, Dissatisfaction and Complaining Behaviour, 5(1), 1–11.
Harvey, L. (2003). Editorial: Student feedback. Quality in Higher Education, 9(1), 3–20.
Hennig-Thurau, T., Langer, M.F., & Hansen, U. (2001). Modeling and managing student loyalty: An approach based on the concept of relationship quality. Journal of Service Research, 3(4), 331–344.
Joseph, M., & Joseph, B. (1997). Service quality in education: A student perspective. Quality Assurance in Education, 5(1), 15–21.
Karatepe, O.M., & Avci, T. (2002). Measuring service quality in the hotel industry: Evidence from northern Cyprus. Anatolia: An International Journal of Tourism and Hospitality Research, 13(1), 19–32.
Lagrosen, S., Sayyed-Hashemi, R., & Leitner, M. (2004). Examination of the dimensions of quality in higher education. Quality Assurance in Education, 12(2), 61–69.
LeBlanc, G. (1992). Factors affecting customer evaluation of service quality in travel agencies: An investigation of customer perceptions. Journal of Travel Research, 30(4), 10–16.
Legoherel, P. (1998). Quality of tourist services: The influence of each participating component on the consumer's overall satisfaction regarding tourist services during a holiday. In Proceedings of the Third International Conference on Tourism and Hotel Industry in Indo-China and Southeast Asia: Development, Marketing and Sustainability (pp. 47–54). Phuket: National Publishing Inc.
Likert, R., Roslow, S., & Murphy, G.A. (1934). A simple and reliable method of scoring the Thurstone attitude scales. Journal of Social Psychology, 5(1), 228–238.
McGorry, S.Y. (2000). Measurement in a cross-cultural environment: Survey translation issues. Qualitative Market Research: An International Journal, 3(2), 74–81.
Nadiri, H., & Hussain, K. (2005). Perceptions of service quality in North Cyprus hotels. International Journal of Contemporary Hospitality Management, 17(6), 469–480.
Nunnally, J.C. (1978). Psychometric theory. New York: McGraw-Hill.
Oldfield, B.M., & Baron, S. (2000). Student perceptions of service quality in a UK university business and management faculty. Quality Assurance in Education, 8(2), 85–95.
Oliver, R.L. (1980). A cognitive model of the antecedents and consequences of satisfaction decisions. Journal of Marketing Research, 17(4), 460–469.
O'Neil, M.A., & Palmer, A. (2004). Importance-performance analysis: A useful tool for directing continuous quality improvement in higher education. Quality Assurance in Education, 12(1), 39–52.
Owlia, M.S., & Aspinwall, E.M. (1996). A framework for the dimensions of quality in higher education. Quality Assurance in Education, 4(2), 12–20.
Parasuraman, A. (1986). Customer-orientated organizational culture: A key to successful services marketing. In M. Venkatesan, D.M. Schmalensee, & C. Marshall (Eds.), Creativity in services marketing: What's new, what works, what's developing (pp. 73–77). Chicago: American Marketing Association.
Parasuraman, A., Berry, L.L., & Zeithaml, V.A. (1991). Refinement and reassessment of the SERVQUAL scale. Journal of Retailing, 67(4), 420–450.
Parasuraman, A., Zeithaml, V.A., & Berry, L.L. (1985). A conceptual model of service quality and its implications for future research. Journal of Marketing, 49(4), 41–50.
Parasuraman, A., Zeithaml, V.A., & Berry, L.L. (1988). SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing, 64(1), 12–40.
Parasuraman, A., Zeithaml, V.A., & Berry, L.L. (1994). Alternative scales for measuring service quality: A comparative assessment based on psychometric and diagnostic criteria. Journal of Retailing, 70(41), 201–203.
Patterson, P.G., & Johnson, L.W. (1993). Disconfirmation of expectations and the gap model of service quality: An integrated paradigm. Journal of Satisfaction, Dissatisfaction and Complaining Behavior, 6(1), 90–99.
Rowley, J. (1997). Beyond service quality dimensions in higher education and towards a service contract. Quality Assurance in Education, 5(1), 7–14.
Soutar, G., & McNeil, M. (1996). Measuring service quality in a tertiary institution. Journal of Educational Administration, 34(1), 77–82.
Stevens, P., Knutson, B., & Patton, M. (1995). DINESERV: A tool for measuring service quality in restaurants. The Cornell Hotel and Restaurant Administration Quarterly, 36(2), 56–60.
Teas, K.R. (1994). Expectations as a comparison standard in measuring service quality: An assessment of a reassessment. Journal of Marketing, 58(1), 132–139.
Zeithaml, V., Parasuraman, A., & Berry, L. (1990). Delivering quality service: Balancing customer perceptions and expectations. New York: The Free Press.
Zhou, L. (2004). A dimension-specific analysis of performance-only measurement of service quality and satisfaction in China's retail banking. Journal of Services Marketing, 18(7), 534–546.
