
Questioning the value of the research selectivity process in British university accounting

Christopher Humphrey and Peter Moizer
University of Leeds, School of Business and Economic Studies, UK, and
David Owen
Sheffield University Management School, UK

Sarah (dean of the faculty, to Roger, head of the accounting department, at a cheese and wine
party to welcome new staff): Hi, Roger. How are you?
Roger: Well things are looking good. I think we are going to get a 5. It might be touch and go
but I’ve been told that we’re almost there (but keep that to yourself… James told me the news
but I’m not supposed to tell anyone). Refereed articles are up on the last two years. We could
do with a few more external grants but I don’t think there is much possibility that Faculty
Board will be able to turn down our bid for two new members of staff. The future’s looking
good for us. With a bit of luck, in the next couple of years, we’ll be able to…
Sarah (interrupting Roger): No. Roger. I meant how are you…personally.
Roger (taken aback): Oh, mmm, uh, oh, fine, Sarah, just fine…er, how are you?

In recent years, research selectivity exercises have played an increasingly influential role in defining the meaning of life in British university accounting
departments. The purpose of research is now more directly set in terms of
maintaining and, it is hoped, improving departmental research ratings. Highly
rated accounting departments readily advertise their assigned research scores
at conferences, in research applications and student recruitment brochures.
Individual academics are no longer just identified as coming from a loosely
grouped association of accounting colleagues known as a “department”, but are
labelled as belonging to a “5”-, “4”- or, heaven forbid, a “1”-rated department.
Staff recruitment drives are more commonly associated with the language of the
“transfer market” – of buying the “best that is out there” – as departmental
heads confront the realities of the economic games forced on, or willingly
accepted by, them as university funding mechanisms become increasingly
dependent on assigned research ratings.
Puxty et al. (1994), in expressing concern at the damage being inflicted on
notions of collegiality, autonomy and public policy debate by the growing
influence of assessment exercises, made a powerful call for action and
resistance on the part of academics. Rather than lending legitimacy to such
exercises by their active participation, Puxty et al. saw a clear need for
academics to construct exposés of the numerous contradictions and
paradoxical consequences of such exercises. A particular obligation was seen to
rest with accounting academics in this respect, given that such external
systems of measurement and surveillance are frequently inspired and bolstered
by the perceived authority of accounting and auditing techniques (Puxty et al.,
1994, p. 162).
This article presents just such a critique, focusing explicitly on the application
of research selectivity in British university accounting. The nature of research
selectivity, with its codified, calculative procedures and pretensions to
objectivity, facilitates abdications of personal responsibility. As Gorz (1989) has
noted, (such) mathematical formulations of reasoning conveniently insulate
against any possibility of self-examination while also dispensing with the need
to give meaning to decisions or to accept responsibility for them. A key intention
of this article, therefore, is to confront individual accounting academics with
some of the questionable effects of research selectivity and to argue that (increas-
ingly less quiet) feelings of self-satisfaction emanating from the relatively
privileged are both myopic and damaging to the higher education system of
which they are a part. A discernible international trend towards institutional
competitiveness in higher education, and the related use of performance
indicators in allocating research funds, suggests that our analysis also has a
clear relevance for academic accounting outside Britain[1].
For Puxty et al. (1994), such a trend is indicative of the progressive “commod-
ification” of academic labour, as work is increasingly measured and rewarded
simply in terms of its contribution to the gaining of a favourable assessment
from bodies responsible for the allocation of resources. Research selectivity, in
their analysis, is a further example of the growing utilization of external
systems of surveillance and control in public sector organizations, both in the
UK and internationally. A particular consequence in the university context is
that:
It also fuels and legitimises the development of managerialism within universities in which a
hierarchical and bureaucratic ethos is privileged over an ethos of collegiality and
professionalism (Puxty et al., 1994, p. 157).

It is the view of this article that, as with most public-sector “accountable management” reforms of recent years, beliefs as to the merits of research
selectivity are just that: “beliefs”, not grounded in any clearly specified,
understood and proven technology for securing desired reforms or detailed
assessment of the implications of pursuing such reforms. For instance, even in
terms of official claims of research resources being “used to best advantage”, it
is interesting to note a recent leader column of Financial Times which
suggested that the desperate scramble for resources generated by research
selectivity has seen a “wasteful duplication” of research effort across
universities and, ironically, has hindered the intended establishment of a small
core of research-based universities (Financial Times, 1994, p. 19). Further, as
Elton pointed out in an address to the Annual Conference of the Society for
Research into Higher Education in December 1991 (reported in Higher
Education Review, 1992, p. 6), the claim that “picking winners” ensures that
resources allocated for research are used to the best advantage has never been
tested or evaluated. Unfortunately, such claims often assume a quite
unshakeable status, with practical problems encountered in the reforming
process being persistently dismissed by those promoting reform as merely
“temporary”, “one-off” set-backs to be overcome with time (see Humphrey et al.,
1993).
In such a context, it is, as Heald and Geaughan (1994) have suggested,
possibly more pragmatic to stay quiet, either on the grounds of personal career
aspirations[2] or through a belief that the reforms will dissipate in time and
provide no more than a minor irritation to existing university life. Such silence,
though, leaves agendas to be set by others[3]. Additionally, life in university
accounting in Britain has changed and looks set to continue to change,
promising a two-tier higher education system, comprising the favoured few
institutions and an under-resourced remainder.
The remainder of the article is divided into three parts. The next section
reviews the history of research selectivity in British university accounting. This
is followed by a critical assessment of the efficacy of research ratings exercises
in improving research quality, with particular consideration being given to the
impact and consequences of such exercises for the old polytechnic (now “new”
university) sector. The article concludes by calling for: major changes in
attitudes and approaches to accounting research selectivity, including the need
for greater commitment and respect to be shown to the notion of an accounting
academic community; more openness and public feedback in the determination
of research ratings; and more considered attempts to distinguish humanities
and arts disciplines from their science counterparts whose extensive resource
consumption was seemingly influential in the initial establishment of the
research selectivity process.

The historical purpose of research assessment and an overview of its application in British university accounting
Government funds to UK universities in the 1980s were allocated by the Univer-
sity Grants Committee (UGC). The initial impetus for the UGC’s involvement in
research assessment derived, not from a concern for educational improvement,
but from the public funding cuts applied to higher education in the early 1980s
(see Jones, 1986, 1994; Sizer, 1988). With the UGC responsible for funding both
teaching and research and the real value of the grant falling year by year, the
Committee publicly stated that selective funding of research was the only means
of protecting the quality of both. The UGC’s Strategy Advice of September 1984
announced the Committee’s intention “to adopt a more selective approach in the
allocation of research support among universities in order to ensure that
resources for research are used to best advantage” (emphasis added). This
statement was translated into the UGC’s first research assessment exercise in
1985-86. Universities were asked to complete a four-part questionnaire covering
various aspects of their research income and expenditure, research planning,
priorities and output. The responses received were considered by the UGC’s
subject subcommittees and were graded on a four-point rating scale: excellent,
above average, average and below average. Accounting was given its own
subject group, which probably arose from the difficulty of classifying it with
another subject group, since accounting groups were associated with both
economics and management. Manchester and the London School of Economics
(LSE) were rated excellent and Bristol and Lancaster were rated as above
average. Four departments were rated as average and 11 as below average. The
research ratings provoked considerable reaction, not least from those rated
below average. For instance, in the July 1986 issue of Accountancy, Dr Bob Berry,
then head of the accounting sector at the University of East Anglia, was quoted
as saying:
It seems inconceivable to me why we got such a low rating. It is totally unclear how the UGC
have rated us (Accountancy, July 1986, p. 13).

The UGC was reported as saying that the research rating was based on
published research work and the amount of outside funding received by
departments. Apart from these general criteria, however, the UGC did not
disclose how the ratings were arrived at. Individual accounting departments
were able to write to Professor John Sizer, who chaired the assessment sub-
committee, but no details were revealed as to why departments attracted the
rating they did, nor was any appeals procedure allowed.
Evaluation of departmental research output was not wholly new to British
university accounting. The first attempt to assess the research performance of
accounting departments occurred in 1978 when Lyall published his analysis of
26 professional and academic journals between 1972-76. Lyall (1978) counted
the pages attributable to each department on several different bases. In terms of
“pages in refereed journals”, the top five research departments were
Manchester, Lancaster, Edinburgh, Liverpool and London Business School,
whereas using “unweighted standard pages” the top five departments were
Manchester, Heriot Watt, Loughborough, Edinburgh and Lancaster. The advent
of the publication of the British Accounting Association’s (BAA) Research
Register subsequently encouraged further analysis of research outputs. Groves
and Perks’s (1984) examination of the 1984 BAA Research Register (containing
publications between 1982 and 1983) classified the top five research
departments as Lancaster, Strathclyde, Liverpool, LSE and Exeter. Gray et al.
(1987) analysed the 1986 BAA Research Register (covering 1984-1985). They
produced a different top five of Birmingham, Manchester, Warwick, Leeds and
Nottingham – a lack of comparability which was seen as merely reinforcing the
practical problems faced by such an evaluation process.
The UGC’s 1986 ratings did little to break such an observational trend, with
Nobes’s comparison of the inconsistencies between Gray et al.’s rankings and
those of the UGC, concluding that:
The fact that there is little correlation…does not help to determine whether either (evaluation)
is meaningful. These exercises are clearly subject to rapid out-dating and to difficulties of
interpretation (Nobes, 1987, p. 289).
Such inconsistencies reinforce the above noted concerns as to the lack of
transparency in the UGC’s ratings. Appeals were made for more information to
be provided as to how particular ratings were arrived at – especially given that
the UGC’s ratings had a distinctive status compared with those of any previous
assessments in that the underlying official motivation of the ratings exercises
was to direct research funding in a more selective manner. However, the
anticipated financial consequences of receiving a particular research rating,
somewhat ironically, did not materialize, with Bourn (1986) noting, in an
accounting context, that the disquiet felt in many institutions as a result of the
assigned ratings was out of all proportion to their relatively small immediate
financial impact.

Continuing research assessment in accounting: from the UGC to the University Funding Council to the higher education funding councils
The UGC’s next explicit evaluation of accounting departments occurred in
1987-88 when it reviewed “small” accounting groups and departments in
universities in order to assess the scope for rationalization of the teaching of the
subject in universities nationally. While primarily concerned with teaching, the
final report made frequent reference to the relationship between teaching and
research activity. For example, it highlighted how teaching and administrative
pressures had hindered the development of accounting research and called for
more stability in the future:
Rapid growth in the size of the professoriat has hindered the development of research in the
subject by breaking up promising concentrations of able accounting researchers. Young
professors have been saddled with the responsibility of creating new departments and degree
schemes, often at a cost to their own research. A period of consolidation would be beneficial
(UGC, 1988, p. 24).

Later in the report, the UGC lent its unequivocal support to the submission
received from the Conference of Accounting Professors, which had argued
against the establishment of “teaching-only” departments:
In the course of our review, we were concerned to discover that pressures exist which, whether
by accident or design, would have the effect of turning some accounting groups into teaching
only units. These pressures include … increasing selectivity in resourcing within institutions
and by research councils. It is our contention that the arguments which might support the
creation of teaching only groups in some subjects do not apply in the BMS (business and
management studies) and accounting field. Research costs consist mainly of salaries and
economies therefore cannot easily be achieved by exclusive concentration of research efforts
in large units … There are reasons for doubting the quality of university teaching unsupported
by scholarship or research. We recommend that teaching only groups should not be supported.
(UGC, 1988, emphasis added).

In 1988, the Times Higher Education Supplement also undertook its Peer Review of
Accountancy (see Times Higher Education Supplement, 24 June 1988). The
survey was based on the replies received from heads of department to a simple
questionnaire, including the question: “Which in your view are the five best
departments in British higher education institutions in your subject bearing in
mind mainly the output and quality of research?” The responses closely
followed the UGC’s 1986 ranking with Lancaster, Manchester and the LSE “way
ahead of the few others to attract more than one vote”. Some departmental
heads were unhappy with the survey, including Professor Tom Lee, then of
Edinburgh University, who was quoted as follows:
This department does not wish to participate in your beauty contest as it would not know how
to rank its sister departments. Nor does it wish to be ranked by the latter (Times Higher
Education Supplement, 24 June 1988).

The UGC carried out its second research selectivity exercise in 1989, making
various changes in response to criticisms of the first exercise. The UGC
acknowledged publicly that it had sought to rectify nine main weaknesses (see
HEFC, 1993, para. 5)[4]. The 1989 exercise sought more information concerning
research activities than in 1986 and focused explicitly on the individual units of
assessment rather than on university-wide data. Details of up to two
publications per member of staff were required, in addition to information on
research students, external research income and research planning and
priorities. A common 5-point rating scale was used and expressed in numerical
form. By the time the results were published, the UGC had been transformed
into the UFC and the 1989 UFC research categories were as follows:
● 5 = international excellence in many areas, national excellence in all
others;
● 4 = national excellence with some evidence of international excellence;
● 3 = national excellence in a majority of areas or limited international
excellence;
● 2 = national excellence in up to half of areas;
● 1 = little or no national excellence.
In accounting, the general standard of research performance appeared
relatively low, with only Lancaster obtaining a 5 rating, and Bristol,
Manchester, the LSE and Aberystwyth each receiving a 4 rating. Fifteen (60 per
cent) of the 25 rated accountancy departments obtained only a 1 or 2 rating. In
law, a subject often compared with accountancy, just 22 per cent of the rated
departments received a rating of 1 or 2, with 38 per cent receiving a 4 or 5 rating
(the comparative figure in accounting being 20 per cent).
Following the 1989 review, the UFC strove to place increasing importance
on research ratings as the basis for allocating research funds. Government
policy in this respect had been reiterated in an education White Paper in May
1991 and in subsequent letters from Educational Secretaries of State to the
various national funding bodies. The UFC created a new formula funding
approach for 1991/92 in which the total block recurrent grant was determined
through allocations across the three categories of teaching (T), research (R)
and special factors (S). The allocation of funds for research was made up of
money for direct research (DR), contract research (CR), staff research (SR) and
judgemental research (JR). The money a university received through DR and
CR was related directly to research grant income received from non-UFC
sources (this had little impact on accounting groups). The SR figure was
dependent on the total number of UK weighted students while JR was
influenced by the product of weighted student numbers and the research
rating of the group (see Mace, 1993, p. 72). It can therefore be seen that, while
research ratings were now a more explicit revenue determinant, the use of the
student multiplier meant that universities could, theoretically, compensate for
any falling research income (caused by poor ratings) merely by expanding
student numbers.
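The allocation just described can be sketched in stylised form (the notation here is ours rather than the UFC’s, and it ignores subject cost weights and other adjustments):

\[
R_i = DR_i + CR_i + SR_i + JR_i, \qquad SR_i \propto w_i, \qquad JR_i \propto w_i \cdot r_i
\]

where \(w_i\) denotes the weighted UK student numbers of accounting group \(i\) and \(r_i\) its assigned research rating. On this reading, a poorly rated group could, at least in principle, restore its judgemental research income by expanding \(w_i\) – the compensating mechanism referred to above.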
The UFC originally proposed to carry out its third assessment exercise in
1993, but the creation of the HEFCs for England, Scotland and Wales led to the
exercise being brought forward to 30 June 1992, so that the ratings could be
used by the new funding councils in the determination of grants for research
with effect from 1993-94. The exercise now covered all higher education
establishments (including the “old” universities and the ex-polytechnics/
colleges of higher education, now called universities). As with the 1989 exercise,
the one in 1992 differed from its predecessor in several important respects,
although the “improved” system utilized in 1989 still contained seven key
weaknesses (HEFC, 1993, para. 7)[5]. The 1992 exercise required all submitting
institutions to put forward only those staff who were actively engaged in
research, and the exercise was made less retrospective by seeking detailed
information on staff in post on 30 June 1992 (the “snapshot” approach), rather
than those who had been in post at any time during the assessment period. In
recognition of the longer time scale for research in the arts and humanities, the
assessment period for these units of assessment was extended by one year to
four-and-a-half years[6] and work accepted for publication was allowed to be
listed by departments.
The 1992 exercise also introduced changes to the funding formula. The
allocation of research money on the basis of student numbers was phased out,
with the number of active research staff (as at June 1992) being used as the
volume multiplier. The judgemental research allocation was now dependent on
a formula which included the “assigned research rating less 1”, meaning that an
accounting group with a rating of one would receive no judgemental research
funds. The impact of this change meant that departments, in completing their
1992 return, essentially had to gamble on the financial benefits of increasing
research volume by submitting more staff in the category of “active”
researchers against the costs of diluting the overall quality of the departmental
research being assessed. The more marginal the member of staff in terms of
research quality, the greater was the risk that a diminution of quality would
offset any gains in research volume[7]. Despite protestations of the illogicality of
requiring universities to gamble (e.g. see Whittington, 1993), the same system
will be in place for 1996, although the HEFCs have stressed that, in removing
the need for departments to produce a total list of publications, they are
reinforcing the importance attached in the assessments to research “quality”
rather than “volume” (HEFC, 1994, para. 24). Nevertheless, they still insist
(when discussing research active staff) that departments will “need to be aware
that research funding will continue to be influenced by the volume of research
(including staff) assessed” (HEFC, 1994, Annex C, para. 15).
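The revised trade-off can be illustrated with a similarly stylised sketch (again our notation; the actual HEFC calculation involved further factors, such as subject cost weights):

\[
JR_i \propto n_i \cdot (r_i - 1)
\]

where \(n_i\) is the number of staff returned as research active and \(r_i\) the assigned rating. A hypothetical department returning 10 staff and rated 3 would generate 10 × 2 = 20 funding units; returning 14 staff but slipping to a 2 would yield only 14 × 1 = 14; and a rating of 1 would yield nothing, however many staff were returned. This is precisely the gamble described above.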
The outcome of the 1992 exercise revealed that of the 31 accounting
departments submitting returns, 12 included 95-100 per cent of their staff,
seven entered 80-94 per cent, two 40-79 per cent and ten submitted less than 39
per cent. There was also a reasonable correlation between the eventual rating
and the proportion of staff entered, so that those at the top of the assessment
tended to enter all their staff. The LSE and Manchester each received a 5 rating
(Lancaster, the other 5-rated accounting department in 1989, was assessed as a
part of the Lancaster University Management School under the business and
management studies panel). Eight departments received a 4 rating, six a 3
rating, five a 2 rating, while ten departments (nine of which were in “new”
universities) obtained the minimum 1 rating.
All resource calculations which flow from such ratings had, up to and
including the 1992 exercise, been conducted on a university basis. There had
been no explicit requirement for universities to distribute research funds in
strict accordance with the formulae being used by the HEFCs. This lack of
transparency between the funding calculations of the HEFCs and the internal
allocation within a university had its roots in a perceived need to preserve
university independence[8] and there is some evidence to suggest that the
funding councils originally never had any intention to seek to “follow the
money” through into individual departments (see Swinnerton-Dyer, 1985). Over
the course of the ratings exercises, however, there have been some unofficial
calls and promptings for the funding councils to become more active in
monitoring university expenditure. Nevertheless, some universities continued,
even in the immediate aftermath of the 1992 exercise, to ignore the HEFC’s
desire for the results of research selectivity to strengthen “centres of excellence”.
For example, in a 5-rated accounting department, each “active” researcher
would generate approximately £28,000 of research funding for the university.
However, instead of rewarding the highly rated departments in this manner,
some universities chose to use the HEFC money to strengthen lowly rated
departments – this policy being perceived to be both more attractive in terms of
the university’s overall “image” and an easier way of increasing income in the
next selectivity exercise. As Mace (1993, p. 19) commented, universities have
seemed “to see the law of diminishing returns applying if resources are
allocated to already highly rated cost centres”.
In the run up to the 1996 assessment exercise, however, the HEFCs have
given their clearest indication to date that they intend to trace the way in which
research funds are deployed within both universities and their composite
departments (see HEFC, 1993, para. 83). There have also been reports that the
HEFCs are planning to launch a comprehensive audit system to monitor
university expenditure. Speaking of a pilot audit programme, Graeme Davies,
chief executive of the Higher Education Funding Council for England, noted
that “it is a major exercise and will get progressively more intensive. And if we
uncover data suggesting that funds are being used incorrectly, we will look
seriously at the nature of the funding for the institution concerned” (Times
Higher Educational Supplement, 4 March 1994, p. 1).

The efficacy of ratings exercises in improving research quality


Given the large number of management reforms in public sectors around the
world (see Hood, 1991; Humphrey et al., 1993), talk of the need for audits,
“tighter” processes of financial accountability and a greater adherence to
funding formulae in public-sector organizations is hardly novel, perhaps even
“natural”. However, such reforms consistently presume that the accountability
mechanisms being imposed will provide the sought-after answers about
expenditure patterns and that the behaviour of those delivering educational
services will be sufficiently conditioned to provide such services in the desired
(enhanced) mode. Yet, such presumptions are not borne out by practice, with a
whole range of accountable management technologies and monitoring
mechanisms failing to secure what was promised or expected (see Broadbent
and Guthrie, 1992; Gray and Mitev, 1995; Humphrey et al., 1993). Further, and
worse, the questions that seem to get asked about such reforms seldom move
beyond the procedural. Thus, in relation to selectivity in research funding, the
debate is increasingly dominated by questions about how to maximize
individual ratings, to secure the flow of money promised from such ratings or to
check whether such money flows have occurred. There is little discussion as to
whether the process of research selectivity has increased, or merely protected,
research quality. As Power pointed out, “research selectivity has undoubtedly
changed the behaviour of academics and their departments – but whether it
does anything to generate high-quality research, which is supposed to be the
purpose of the exercise is highly questionable” (quoted in Times Higher
Educational Supplement, 22 April 1994, p. 3). This part of the article focuses on
this issue by directly addressing the basic question as to whether the research
ratings exercises have improved research quality in accounting.

Defining and measuring research quality in accounting


In order to begin to address the question as to whether moves towards research
selectivity have improved research quality in accounting, it is helpful initially to
examine the intermediate question as to how “quality” research in accounting
has been defined and measured.
In terms of defining research, the research assessment process has
experienced considerable problems. The 1992 exercise made an explicit attempt
to reward “applied” research, in addition to the previously rewarded “basic” and
“strategic” research (a revision made in the light of received criticisms of the
1989 exercise). Research returns across subjects, however, included very few
applied research publications, a not surprising outturn given that the HEFCs’
circular explaining the assessment rules had stated that “no more than a minor
proportion of funds for basic and strategic research is likely to be allocated
selectively by reference to the volume of applied research” (see HEFC, 1993,
para. 53). The standing of applied research has, subsequently, been raised
further for the purposes of the 1996 assessment exercise[9] with the HEFCs’
explanation giving some revealing insights into the independence of “peer-
review” panels:
In accordance with the Government’s policy for publicly-funded research set out in the science
and technology White Paper Realising Our Potential (Cm 2250), panels will be instructed to
give full recognition to work of direct relevance to the needs of commerce and industry, as well
as to the public and voluntary sectors. All research, whether applied or basic/strategic, will be
given equal weight (HEFC, 1994, para. 11, emphasis added).

Such a decision, and the accompanying talk of placing industrialists on the assessment panels, drew a notable expression of support from the University of
Central England, which had refused to participate in the 1992 assessment
exercise on the grounds that the emphasis on “pure” research took no
consideration of the institution’s particular strengths (see The Times
Educational Supplement, 14 October 1994, p. 4). Significantly, on the same day
The Times Educational Supplement also reported the “deep concern” (p. 44)
expressed by university mathematics departments that their research was in
danger of being ignored or undermined by the absence of immediately
identifiable industrial “consumers” and the pressure to establish direct wealth
generating connections for research projects.
The issue of interdisciplinary research has also been a problem. In utilizing a
system of assessing departments in subject groups or “units of assessment”
and choosing to judge quality on levels of national and international excellence
in the subject area, the status of interdisciplinary research work has become
uncertain. In the 1992 exercise, the requirement for individuals to be returned
only once by an institution led to few single submissions by interdisciplinary
research groups. The members of these groups were generally apportioned to a
chosen “single subject” unit of assessment. The HEFCs stressed that they
would treat any such research groups with proper consideration, but it remains
an open question as to the negative effect such assessment rules and
perceptions have had on the institutional standing and development of
interdisciplinary research.
Operational problems relating to the definition of research, however, pale in
significance when consideration is given to the task of measuring research
“quality”. The UFC placed great store on the need for a common interpretation
of the rating scale across subjects, asserting in its review of the 1992 ratings
exercise that:
Common adoption and interpretation of the five point scale and the absence of any statistical
normalisation mean that the 1992 ratings of different subjects are comparable (UFC, 1992a,
para. 18).

Despite the above assertions of the UFC, and the acknowledged significance of
research ratings to the process of allocating funds, a relatively widespread view
among assessment panels was that the rating scale of 1 to 5, and associated
definitions, was difficult to apply, especially in the arts and humanities, where it
was suggested that a different scale could have been more helpful (HEFC, 1993,
paras 46-7).
The 1996 rating exercise will utilize a 7-point scale, but the possibility of any
past or future ratings exercise living up to the above assertions of the UFC is not
likely, given the essentially subjective nature of the whole ratings process:
The first conclusion must be that the research rating exercise is ultimately a matter of
subjective judgement, however many “objective” measures are fed into it. This means that
there are no simple statistical rules, such as “publish n papers in refereed academic journals”
which will guarantee a high or improved rating. Given the essentially arbitrary nature of such
rules, this is probably no bad thing (Whittington, 1993, p. 393).

The HEFCs offered no guidance on the “weights” panels might give to different
aspects of research, such as journal articles, books, research studentships, the
generation of research income, research potential and plans for the future (see
O’Brien, 1994). It is known that some panels relied more heavily than others on
statistical analyses of performance indicators, while others, as in accounting, took
a more pragmatic view of research quality (“we know it when we see it” –
Whittington, 1993, p. 385). Interestingly, despite claims that the research rating
process contains no simple rules as to how to secure a high rating and was a
complex professional assessment of research quality dependent on “the
competence and integrity” of panel members (see Whittington, 1993, pp 390-3),
statistical surveys of the ratings exercise have portrayed a rather more
predictable process in some subjects (see the July 1994 report of the Joint
Performance Indicators Working Group of the HEFCs). For instance, in the case
of business and management studies, eight of the 12 possible elements of research
output[10] had contributed significantly to the research ratings awarded –
whereas, in accounting, only three of the factors correlated significantly with the
research ratings (these being articles in academic journals, total publications and
short works).
Ratings across units of assessment also vary widely, the extent of dispersion
again casting doubt on the comparability of the assessment methods applied by
subject panels. For instance, in 1992, 65 per cent of departments assessed in
anthropology were awarded a 4 or a 5, compared with 32 per cent in accoun-
tancy, 27 per cent in sociology, 14 per cent in business and management studies,
and just 12 per cent in social work (see O’Brien, 1994). In interpreting such
differences it could be claimed that some assessment panels retained a greater
degree of loyalty to their subject than to the ratings exercise. For instance, in
accountancy, among the 12 departments which were strictly comparable, the
average rating improved from 2.5 in 1989 to 3.5 in 1992, and in no case did the
rating decline. However, in law, in a period in which it is widely acknowledged
that law departments had devoted unprecedented efforts to research, only three
were deemed to have improved their research quality, while ten departments had
their rating downgraded.
The impact of ratings exercises on research activity
Having acknowledged that research ratings exercises are essentially subjective
processes in terms of their assessment of research quality, it may seem a little
strange to ask the question as to the extent to which such exercises have
improved research “quality”. However, such questions are essential if the
purpose of the assessment exercises is not to be lost. More significantly for this
article, the answers to such questions provide powerful illustrations of the
fundamental limitations of research assessment exercises in their stated task of
improving research “quality”.
Surprisingly, the HEFCs (and their predecessor, the UFC) are rather cautious
on this issue. Despite claims that the 1992 ratings were comparable across all
units of assessment, no such confidence is applied to any comparisons of
“research quality” across time:
Although the same rating scale has been used in both the 1989 and 1992 Exercises, with the
same definitions, the 1992 Exercise has been carried out on a different basis and the results
should not be directly compared with the earlier ratings (UFC, 1992a, para. 17).

One does not have to look very far to find other indications that the ratings
reveal very little about changes in research quality. Perhaps some of the most
severe criticisms of research selectivity have been directed at its encouragement
of increased output at the expense of quality. For instance, the Times Higher
Education Supplement (4 December 1992, p. 1) highlighted work which
suggested that, while the number of articles published by British academics in
leading scientific journals had increased, the number of citations of British
papers went down (in contrast to the rest of the European Union and the USA
where citations increased). Sir David Phillips, head of the advisory board for the
research councils, pointed out that in the past many dramatic scientific
breakthroughs had been achieved by scientists who had worked on projects
even for ten years without publishing anything. He commented:
I suspect many scientists have been changing their behaviour or even the nature of the
research they do in order to optimise their performance in the RAE. If that leads to people
always doing research that leads to publishable research in three years’ time, I certainly do not
think it is a good thing (Times Higher Education Supplement, 4 December 1992, p. 1).

In accounting, it is clear that the number of research publications has dramatically increased over the last 15 years or so. For instance, Puxty et al. (1994, p.
142) note that the absolute number of publications increased sixfold between
1980 and 1992, while the number of publications per member of staff increased
2.44 times. To what extent quality has increased is a more open issue.
The 1992 Accountancy Assessment Panel, for instance, felt unable to commit
itself to any very solid statements concerning the changing quality of accounting
research between 1989 and 1992. As Whittington (1993) commented when
considering the average rating increase in accounting: “this suggests an
improvement in the quality of research during the period, although in a subjective
assessment such as this, it is possible that it also reflects a greater generosity in
the judgements of the 1992 panel” (Whittington, 1993, p. 392). Further, the more
one delves into the process of determining research quality, the more messy and
debatable it appears. For example, the Accountancy Panel’s assessment of
“international excellence” on the grounds of “we recognise it when we see it”
(Whittington, 1993, p. 385) implies a degree of agreement that clearly does not
exist among accounting academics. For instance, The Accounting Review is often
portrayed as a quality research journal, largely on its historical status as the
leading American academic accounting journal, but it is scornfully dismissed by
some Americans:
So long as The Accounting Review (and its clones) continue to dominate the market for
accounting research, and hence the system of promotion and tenure at most institutions, and
so long as mainstream institutions within the academy and the profession systematically
exclude dissident voices, the decline of the accounting profession will be assured (Neimark,
1993, p. 14).

Likewise, in a masterly exhibition of polemic, Arrington berates mainstream American accounting academics for subscribing “to a form of libertarian
economic voodoo masquerading as science” (Arrington, 1990)[11].
Such discussions as to the determination of research quality, however, rarely
surface, with the main effect of research selectivity being to generate the game
of “rating improvement”. This was classically illustrated in Whittington’s
discussion of the subjective nature of research assessment in accounting. In
concluding that the lack of any clear-cut rules for securing a high rating is
“probably no bad thing” (Whittington, 1993, p. 392), he then went on to specify
the ways in which a good rating could be secured:
There are, however, certain obvious things which departments can do which will increase the
likelihood of a favourable rating. Publishing in refereed journals is certainly one of them, as is
publishing work of good research quality through any medium. Publicly available research
output will always be the most convincing evidence of quality. However, this will be enhanced
by other marks of approval and evidence of activity, such as research grants awarded or
research students supervised. Above all, the whole picture will be made more coherent and
convincing by a well-conceived research plan: this was something to which sufficient
departments did not devote sufficient attention in the 1992 exercise (Whittington, 1993).

Whittington’s comments are of particular significance because his article represented the main public feedback to the accounting research community as
to how the panel had arrived at its ratings. This form of “general” feedback had
been the HEFCs’ favoured mode of communication for what they regarded as
essentially confidential deliberations (HEFC, 1993, para. 30). In commenting on
the 1992 exercise, the HEFCs had noted that several assessment panels did not
utilize the available facility to make confidential comments to university heads
– “in part because of an unwillingness to encourage a dialogue with
institutions” (HEFC, 1993, para. 92), even though such institutions had stated
that they would welcome more detailed feedback. The HEFCs felt that such a
feedback facility could be used on a wider basis but stressed that “it would have
to be on the clear understanding that no further correspondence would be
entered into” (1993, para. 92).
The HEFCs’ emphasis on terms such as “confidentiality” and the intent to
limit the provision of details of individual departmental ratings, together with
the absence of any right of appeal against such ratings, sits somewhat
uncomfortably with the main avowed intent of improving research quality
(UFC, 1992b, Annex E, para. 1). Research publications are, by definition,
publicly available. Documentation submitted by departments is subject to
audit. Research ratings are published by the HEFCs for each department. In
fact, the element of the research assessment process which was kept most secret
was the way in which the ratings for individual departments were determined
– which for any department intent on improving research quality is perhaps the
thing it needed to know most. This inevitably raises the suspicion that the
HEFC wished to avoid accountability for such an arbitrary process.
There is a wealth of published research concerning performance measure-
ment in the public sector which has highlighted the distorting effects of placing
too much emphasis on singular performance indicators and the need to recognize that they
are little more than triggers for further investigation and not ends in themselves
(e.g. see Hopper, 1986; Hopwood, 1985; Humphrey et al., 1993; Mayston, 1985;
Pollitt, 1986). In the field of public-sector auditing and inspection, significant
efforts have been made to ensure that auditor/inspector’s reports are made
available to those being audited and those charged with the responsibility of
responding to critical comments and improving performance. Yet, despite this
literature, university departments in the wake of three research assessment
exercises are left to scrabble around to discover how their particular rating was
determined and what must be done to improve it. If the HEFCs were intent on
using selectivity to improve research quality, departments would be receiving
detailed reports of the way in which their rating was determined.

The implicit hypocrisy of research selectivity – the case of “new” universities


The questionable impact of research selectivity processes in terms of seeking
general improvements in research quality is most effectively illustrated by
considering the case of the “new” universities.
In considering the treatment of the “new” universities in the 1992 assessment
exercise, the HEFCs concluded that they were given a full and fair assessment
by panels and that there was no apparent bias against a sector where a
“research” culture may have been less developed (HEFC, 1993, para. 98). This
may sound reassuring but, nevertheless, the new universities generally received
low research ratings across research subjects. Without detailed published
evidence of the way specific assessments were reached, it is difficult to verify or
deny the validity of the HEFCs claims. However, even within the information
that has been published, there are some quite strong pointers to the existence of
bias against new universities, at least in the case of accountancy.
One clear example is with respect to the publication of books. The 1992
research assessment exercise made it clear in its definition of research that the
review was concerned with “original investigation undertaken in order to gain
knowledge and understanding”, although it then added that this included
“scholarship” (UFC, 1992b, Annex A). In describing suitable forms of research
output, the initial guidance stated simply that books should have an Inter-
national Standard Book Number (ISBN) and that “institutions might want to ask
themselves whether a publication in question is the result of scholarly activity
and whether it has been published externally to the institution, for sale under a
recognised imprint” (UFC, 1992b, Annex E, para. 8). From the above, it could be
claimed that published textbooks, bringing together a subject literature for the
purpose of disseminating it to a wider student and professional audience, would
fall into the classification of a “scholarly” work and potentially count as research
for the purposes of the assessment exercise.
In accountancy (where nine of the ten 1 ratings in the 1992 assessment exer-
cise went to “new” universities – meaning that they would receive no funds for
research activities), the indications are that such textbooks were not treated as
research, with the low ratings being attributed to:
a misunderstanding of what is meant by research. Many of those [staff] submitted [from the
new universities] for assessment had published text books or other teaching material which
did not represent the creation of new knowledge (Whittington, 1993, p. 388).

Whittington spoke quite disparagingly of what he saw as a failure to read the UFC’s guidance carefully. “The Circular provided very clear guidance on the
definition of research and the procedures to be followed by each panel in
assessing its quality. The Accountancy Panel worked hard to follow this
guidance: not all of those departments submitting seemed to have done so”
(Whittington, 1993, p. 394). From the above, it could be suggested that a panel
which contained more than just the one representative from a new university
could well have arrived at a different interpretation of the meaning of “research
output”. This is especially so given the evidence provided by Gray and Helliar
(1994, p. 244) that accounting staff in the “new” universities who do publish, do
appear to be relatively more prolific than those in the “old” universities across
many forms of publication with one key exception – “premier research
journals” (the mode of publication central to the accountancy panel’s 1992
“research” assessments)[12]. Further, as regards the claimed clarity of the 1992
guidance with respect to research definitions, it is worth pointing out that the
definition to be utilized for the 1996 exercise still includes “scholarship”, but an
asterisk now notes that “scholarship embraces a spectrum of activities
including the development of teaching material; the latter is excluded from the
research assessment exercise” (HEFC, 1994, Annex A, emphasis added).
A more recent example where the research selectivity exercises can be seen to
be continuing to work against the “new” universities is with respect to the issue
of when to recognize “published output”. In the 1992 exercise, “published
output” had to have been published, or accepted for publication, during the
period 1 January 1989 to 30 June 1992 (UFC, 1992b, Annex E, para. 4). For the
1996 exercise (see HEFC, 1994, Annex C, para. 41), only work published
between 1 January 1992 and 31 March 1996 will be accepted as “published
output” (in all but humanities and arts subjects where the assessment period is
two years longer, going back to 1 January 1990). This switch in approach was
apparently made on the grounds that some assessment panels had difficulty
obtaining copies of forthcoming publications, although the power of such an
argument is somewhat reduced when it is noted that some panels (such as those
concerned with hospital-based clinical subjects with over 4,000 active
researchers) did not read cited publications, assessing quality in terms of the
name of the journal in which the articles were published (HEFC, 1993, para. 23).
The change is also surprising given that a major criticism of previous exercises
was that they had been retrospective and taken little account of work in
progress and research potential (HEFC, 1993, paras 5, 7).
For many departments in “new” universities, which have only recently
started to try to build up a research culture, such a switch in assessment rules
could be severely damaging. While the HEFCs can claim that they have
encouraged research quality not quantity in the 1996 exercise by deciding not
to require departments to produce a complete list of departmental publications
in the assessment period and asking research-active staff to submit what they
regard as their best four pieces of published output, the reality in many “new”
university departments is that quantity will be the key driver of research
publication activity. For instance, new lecturers starting in a “new” university
accounting department in 1992 would have to publish one refereed academic
article per year to stand in a comparable position to most active researchers in
established accounting departments in “old” universities (who could do the
same over a longer period and with a likely lower teaching load). If this seems
unfair, it should be pointed out that it is not an isolated anomaly in the ratings
process. The assessment exercises have persistently chosen to ignore the
differential contexts within which academic staff are working and sought to
base everything on a supposedly absolute standard of research output quality.
Each research assessment point represents a differing attainable level of
national or international excellence, where “attainability” is defined as:
an absolute standard of quality in each unit of assessment, and should be independent of the
conditions for research within individual departments (HEFC, 1994, Annex B, note 2; UFC,
1992b, Annex C, note 2).

Such a lack of consideration of input-output relationships seriously questions the claims of research assessment to be improving research quality, and points
to the need for more serious consideration to be given to the fundamental
structural implications that such assessment exercises (and their defined
“rationale”) have for the provision of university education.
While the Accountancy Assessment Panel hopes for improvement in the
research ratings of the new universities (see Whittington, 1993, p. 392), it is
clear from the above that they will not enter the 1996 research assessment
exercise on a level playing field with the old, established universities.
Throughout the 1980s, the former polytechnics suffered a significantly steeper
reduction in average unit costs than the traditional university sector – leaving
academic staff with approximately 100 per cent higher teaching loads than their
counterparts in the traditional universities (see Hutchinson, 1989; Puxty et al.,
1994). When coupled with new recruits to accounting departments in the new
universities predominantly not having a traditional academic background
(Gray and Helliar, 1994), and with administrative pressures hindering staff
development activities (Weetman, 1993), new universities would seem to have
little realistic expectation of achieving high research rankings in the next
selectivity exercise. To develop a research ethos where little or none has existed
before takes time and resources, both of which are seemingly being denied to
new universities under the current funding mechanism.
The clear implication of the above is that the current emphasis on research
selectivity and the particular way in which the process has been interpreted and
applied in accounting looks set to assign many of the new universities
permanently to a second-class teaching-only status. Ironically, this is the very
status to which, as noted earlier (see p. 145), the Conference of Accounting
Professors was so adamantly opposed just seven years ago because of concerns
as to the quality of university teaching unsupported by scholarship or research.
Today there is little discussion of such concerns. “Teaching-only” status is seen
to be an inevitable consequence of the need to protect the quality of accounting
research. There is also a tendency to over-glamorize the past pedagogic
achievements of the former polytechnics, as exhortations are made for the
established universities to continue the “cutting edge” task of research and for
the new universities to go back to what “they were good at” – basic teaching
and training (see Financial Times, 1994, p. 19).
Nevertheless, the validity of the past concerns of accounting professors
remains. The critical motivation for research selectivity rested originally on the
expensive nature of research in the physical sciences. As Swinnerton-Dyer put
it, “the debate about research (selectivity) is primarily about research in the
sciences, not least because the identifiable costs are predominantly there”
(Swinnerton-Dyer, 1985, p. 13). However, the conduct of accounting and other
social science research, unlike medicine or astrophysics, does not depend on the
existence of one or two multi-million pound, hi-tech operating theatres or space
centres. Further, as Tinker points out, “accounting does not wait to be
discovered like an atomic particle or a pulsar” (Tinker, 1985, p. 206). On the
contrary, much of what passes for accounting “research” is concerned with a
new interpretation of existing knowledge, and in principle can be readily
assimilated by researchers in their accounting undergraduate teaching material
(see Owen et al., 1994).
If, as Brownstein (1989) argues, education is not simply about imparting the
most-up-to-date (technical) information on a particular subject, but is also about
extending that information base and transmitting a general, critical, cognitive
skill, there has to be grave doubt as to the educational status of the type of
technical accounting training that can be expected to be forthcoming from
teaching-only accounting departments. In such a context, it may be more
appropriate to drop the artificial language of “research and teaching” and
“teaching-only” institutions and to speak explicitly of a future which will
increasingly see the co-existence of a small, well-funded, “élite” set of account-
ing departments developing and imparting new accounting knowledge and a
“third-rate” rump, struggling with an inadequate resource base, out-sized
classes, heavy teaching loads, poorly stocked libraries and out-dated, essen-
tially tedious, curricula. There are also major social implications associated
with maintaining such a division. British universities have tended to have an
over-representation of middle-class students, at the expense of their working-
class compatriots (see Burgess, 1978; Mar Molinero, 1986), with private schools
educating only one child in 15, but providing one-quarter of British university
students (Hutton, 1994). Traditionally, however, the former polytechnics’
student intake was characterized by a higher proportion of working-class, and
mature, students, many with relatively poor entry qualifications compared with
the traditional university sector. Additionally, many, though admittedly not all,
polytechnics made efforts in the 1980s to recruit various kinds of under-represented groups into their student population – for example, inner-city residents, members of ethnic minorities and women in traditionally male disciplines (Fulton, 1991). As Mar Molinero (1989) points out, redistributing resources supposedly to protect excellence may ultimately result in discrimination against these latter groups who are most in need of support.

Redefining research selectivity and displacing the pursuit of self-interest
This article is concerned to stimulate critical reflections on the current role and
form of research selectivity in British university accounting. The analysis of the
various research selectivity exercises conducted in recent years and of ones
planned for the future has severely questioned claims that research selectivity
ensures that “resources for research are used to best advantage”.
Despite claims to the contrary, the current process of research selectivity as a
means of funding academic institutions looks as arbitrary and subjective as
many of its predecessors. More significantly, there are strong grounds for
suggesting that the relatively minor cost of accounting research largely negates
the need for research selectivity and severely challenges the educational merits
of concentrating research resources in a few “centres of accounting excellence”.
While the 1980s did see some marked advances in accounting research (see
Puxty et al., 1994 for a discussion), accounting still remains a fledgling
academic discipline, generally dismissed as a technical subject, lacking in
academic rigour by many of our colleagues in the longer established social
sciences and viewed at best as instrumentally useful in its role as an
institutional “cash cow”. At a time when the above developments in accounting
research should be being consolidated and disseminated between, and beyond,
accounting scholars, research selectivity processes are increasingly
undermining the value of establishing and working within a broad-based
research community.
Instead of talk of academic freedom of thought, an open exchange or sharing
of ideas and the need to build a sound, scholarly basis for a university career,
research selectivity is promoting the language of self-interest, marketing and entrepreneurship. There is talk of individuals not being encouraged to work with people outside of their own institution, for fear that it will dilute subsequent research ratings[13]. Rather than generating research quality by diligent and
caring supervision, the simple task now is to acquire “star players” before the
next assessment deadline, with the more highly rated departments increasingly
able to finance such activities where “research monies follow research ratings”. Individually, academic reputations depend increasingly on a continuing ability to “churn out” publications. This applies especially in the early years of an academic career. However, even recognized “quality” researchers are not immune,
running the risk of being labelled (at least informally) as having passed their
“sell-by date” if they slacken off in their productivity – notwithstanding that they
have been passing their knowledge and experience to others in ways that do not
immediately generate improved ratings points.
For those who have given some consideration as to where this is all leading,
the outlook does not look very rosy. As Willmott pointed out, research selectivity and allied developments encourage:
academics willingly to restrict their work to those duties and activities that provide the
greatest measurable output for the lowest risk and least effort…Substantive quality of student
contact and research preparation will decline as formal measures of teaching and research
activity will doubtless improve, and thereby invite and legitimise the further extension of the
disciplinary power of selectivity exercises and quality audits (Willmott, 1995, emphasis in
original).

In this light, is it that implausible to see a future in academic accounting in which staff are actively discouraged from refereeing for research journals;
seldom participate actively in research seminars or conferences; only undertake
the supervision of PhD students if a publishable paper looks likely; are tempted
to referee unfavourably (i.e. reject) papers which they suspect of emanating from rival institutions; or seek to set up their own journals so as to provide a
guaranteed outlet for departmental publications?
The expression of concern over current and prospective scenarios implicitly raises questions as to the shape of any alternative approaches to improving
research quality and the general desire for, and the capacity to secure, change
on the part of accounting academics. Given that the selectivity exercises
are being conducted by senior accounting academics, as a process of peer
review, the capacity to secure changes in the way such exercises are carried out
is clearly there. Two specific changes come readily to mind. First, if the
intention of research selectivity is to improve the general standing of
accounting research, rather than merely protect its practice in select
departments, there is a clear need for more information to be fed back to
departments so that they can see in what ways their research was deficient and
what steps they would have to take to improve their situation. Second, the
current discrimination against the new universities needs to be brought to an
end. There needs to be an extensive debate as to what is meant by research in
accounting, with due recognition being given to the diverse nature of acts of
research or scholarship in any future assessment exercises. There also should be an explicit attempt to relate the research output achieved by departments to
the resources at their disposal.
Such changes carry some risk to those presently reaping the benefits of
research selectivity, either by undermining any claims to objective, systematic
research evaluation or by reducing the awarded level of research funding. For us,
such risks and potential costs are outweighed by the benefits. In the first place,
any system of research funding has to be judged on the claims being made for it
and not allowed to hide behind any form of pseudo-objectivity. Second, it is in the
wider interests of the academic accounting community if people begin to move
away from assessing funding decisions purely on the basis of “what’s in it for
me”. For all the promotion of the merits of entrepreneurial activity, there is a
wealth of evidence and argument in the philosophical and ethics literature which
shows how individuals pursuing their own self-interests can independently take
courses of action which ultimately prove to be worse than if they had co-
ordinated their potential choices of action (as in the classic ethical scenario of the
prisoners’ dilemma – see Mackie, 1986, p. 115). Or as Ken Livingstone so
cogently put it:
If some early Thatcherite…hunter gatherer had suddenly announced “sod this co-operative
group, I’m an individual”, and set off to make his fortune on his own, he would merely have
starved to death or provided a tasty morsel for a passing sabre toothed tiger (Livingstone,
1989, p. 250).

In terms of accounting research, it can be argued that maintaining the current competitiveness and narrowness of the research selectivity process is likely to
produce a decline in the perceived value of the accounting discipline. Aside from
the stigmatizing effects of low ratings received by virtually all the new
universities (and the resulting limited resource deployment), it can hardly help
claims that accounting has a rightful place in the list of established social
science subjects if a significant proportion of accounting departments continue
to be rated as inadequate in research terms. This point has an additional
significance when it is recognized that, as yet, the HEFCs have chosen not to
review the allocation of research resources across subject disciplines, but have
focused solely on the allocation of funds within subject groups despite the quite
clear anomalies in the current nature of the relative allocations between
subjects.
For those who remain unconvinced of our calls for less competitiveness in
accounting research, we cannot offer any absolute proof that a future, more co-
operative, world will generate higher-quality research. All we can say is that
research selectivity does not appear to have lived up to its promises and that, to
avoid an endlessly expanding cycle of more assessment, more evaluation
criteria and more competition, explicit counter-action is needed. In the course of
writing this article, we have come into contact with a significant number of
accounting academics, some within top-rated departments, who have privately
expressed their dissatisfaction with the current selectivity system[14]. In
encouraging a more public expression of such concerns, and a widening of the debate over research selectivity, we hope that we have alerted the academic accounting “community” not to repeat the errors perpetrated in the aftermath
of government cuts implemented in the early 1980s:
The strongest academics did not care enough for the fate of the weaker ones, even though they
must have known that those who teach in the more privileged institutions are often very
similar in terms of ability to those who work in a polytechnic or in one of the less prestigious
universities. The sense of co-operation between academics has been ruptured, because the
academics co-opted by the government to allocate resources proved themselves too willing to
do what was asked of them and not keen enough to stand up for higher education in general
(Kogan and Kogan, 1983, p. 149).

Notes
1. For example, see Murphy’s (1994) discussion of recent developments in Australia.
2. A recent report in The Guardian’s education supplement (1994, p. 2) on the impact of
market-led approaches in universities included the following observation from Colwyn
Williamson, founder of the Council for Academic Freedom and Academic Standards: “The
disincentive to exposing what you see as serious academic problems are considerable. For
a start there’s a general demoralisation among academics at the moment. Every stage of
your progress is subject to the goodwill of your superiors. Every year there are increments
you may be granted or denied...everything depends on not making enemies of your
superiors”.
3. For instance, by a government which, aside from advocating research selectivity, has, over a 13-year period, given out £31 billion in income tax cuts to the predominantly wealthy (the top 1 per cent of taxpayers receiving 27 per cent of this amount – see Dean, 1993).
4. These included, among other things, that: the criteria for assessing research quality had
not been made clear to universities; interdisciplinary research had not been properly
assessed; different assessment standards had been used for different subjects; an appeals mechanism had not been in existence; and the exercise had been largely retrospective,
taking little account of work in progress and research potential.
5. Most significantly, these still included a lack of clarity regarding assessment criteria and
the continuing retrospective nature of the assessment exercise.
6. Accounting was included under arts and humanities, although for the forthcoming 1996 exercise it has been excluded.
7. In general terms, as the effect of an improvement in research rating was greatest at the
bottom of the scale, it would have been beneficial for a group expecting to receive a 1 rating
(if all its staff were included) to exclude its most marginal researchers in the hope that this
would move the rating up to a 2, since this would raise the income above zero. The
position for those expecting middle-ranking ratings was much less certain, since a group
which might have got a 3 if all staff were included, might suppose that it would get a 4 if,
say, two members were excluded. However, if the forecast was wrong and the group still received only a 3, it would have lost two members of staff who would have qualified for the per capita research funding allocations.
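To make the incentive concrete, consider a purely illustrative sketch (the weights, staff numbers and formula here are our own simplification, not the actual funding model). Suppose a unit's research income were roughly
F = w_r × n × v,
where w_r is the funding weight attached to rating r (with w_1 = 0), n is the number of staff submitted and v is the per capita unit of resource. A group of 20 expecting a 1 loses nothing by omitting two marginal researchers if that secures a 2, since income rises from zero to 18 × w_2 × v; a group of 20 expecting a 3, by contrast, gambles a certain 2 × w_3 × v against an uncertain gain of (18 × w_4 − 20 × w_3) × v, and is left worse off if the rating stays at 3.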
8. The maintenance of this position was also a possible reflection of a high degree of
arbitrariness in the way the funding councils apportioned research funds across subject
groups. For instance, an accounting researcher is valued at over £7,000 per rating point
and a management researcher at just over £4,000 per rating point. Similar inequities exist
in other subjects, e.g. chemistry and physics, where a research chemist is valued at
considerably more than a research physicist.
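Assuming, purely for illustration, that the per-rating-point figures apply to each submitted researcher and that funds scale linearly (our own hypothetical figures, used only to indicate the scale of the disparity), a 20-strong accounting group rated 4 would attract roughly 20 × 4 × £7,000 = £560,000, while an identically sized and rated management group would attract about 20 × 4 × £4,000 = £320,000.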
9. Although it could be argued that applied research was compensated through external research funding – with it being easier for “applied” than for “pure” subjects to secure such funding from industry and commerce.
10. These were: authored books, edited books, short works, refereed and other conference
papers, articles in academic, professional and popular journals, reviews of academic
books, other publications and output, and total publications.
11. For Arrington, a major problem in the accounting discipline lies in the fact that “prescriptions about what constitutes ‘good’ research have been laid out by a hegemonic academic élite whose ideas read like the antithesis of current philosophy of science” (p. 6). Similar concerns have been voiced in economics and business management by practitioners as well as academics about the relevance and value of such research (for example, see Harley and Lee, 1995; Harvard Business Review, September-October and November-December 1992).
12. Hutchinson (1989) makes a similar point when drawing attention to the wider definition of
research adopted by the body responsible for maintaining academic standards in the old
polytechnic sector, the Council for National Academic Awards.
13. This is particularly true of joint research grants, where funding bodies require the grant to
be assigned to one person and therefore one institution, preventing collaborating
researchers in different universities from including the grant in their department’s
submission.
14. The relative absence of such policy critiques in accounting journals should not be seen as especially convincing counter-evidence – after all, as Puxty et al. (1994) have stressed, such
activity on the part of academic accountants currently receives little reward in the research
ratings process.

References and further reading


Accountancy (1986), July.
Arrington, E. (1990), “Intellectual tyranny and the public interest: the quest for the grail and the
quality of life”, Advances in Public Interest Accounting, Vol. 3, pp. 1-16.
Bourn, M. (1986), “Fighting back over the funding issue”, Accountancy, Vol. 98, November, p. 25.
Broadbent, J. and Guthrie, J. (1992), “Changes in the public sector: a review of recent ‘alternative’
accounting research”, Accounting, Auditing & Accountability Journal, Vol. 5 No. 2, pp. 3-31.
Brownstein, L. (1989), “University education in a free society: a preliminary defence”, Higher
Education Review, Vol. 22 No. 1, pp. 6-20.
Burgess, T. (1978), “Excellence or equality: a dilemma in higher education?”, Higher Education
Review, Vol. 10 No. 2, pp. 41-54.
Dean, M. (1993), “Will it be tax-man or axe-man?”, The Guardian, 11 September, p. 23.
Elton, L. (1986), “Research and teaching: symbiosis or conflict”, Higher Education Review, Vol. 10
No. 2, pp. 41-54.
Financial Times, (1994), 11 October, p. 19.
Fulton, O. (1991), “Slouching towards a mass system: society, government and institutions in the
United Kingdom”, Higher Education, Vol. 21, pp. 589-605.
Gorz, A. (1989), Critique of Economic Reason, Verso, London.
Gray, R. and Helliar, C. (1994), “UK accounting academics and publication: an exploration of
observable variables associated with publication output”, British Accounting Review, Vol. 26
No. 3, pp. 235-54.
Gray, C. and Mitev, N. (1995), “Management education: a polemic”, Management Learning, Vol. 26 No. 1.
Gray, R.H., Haslam, J. and Prodhan, B.K. (1987), “Academic departments of accounting in the UK: a note on publication output”, The British Accounting Review, Vol. 19 No. 1, April, pp. 53-71.
Groves, R.E. and Perks, R.W. (1984), “The teaching and researching of accounting in UK universities, a summary”, The British Accounting Review, Vol. 16 No. 2, Autumn, pp. 10-19.
The Guardian (1994), Education supplement, 22 March, p. 2.
Harley, S. and Lee, G. (1995), “The academic labour process and the research assessment exercise:
academic diversity and the future of non-mainstream economics in the UK universities”,
Working paper, Leicester Business School, De Montfort University, Leicester.
Harvard Business Review (1992), various issues.
Heald, D. and Geaughan, N. (1994), “Formula funding of UK higher education: rationales, design
and probable consequences”, Financial Accountability and Management, Vol. 10 No. 4,
pp. 267-89.
HEFC (1993), A Report for the Universities Funding Council on the Conduct of the 1992 Research
Assessment Exercise, Higher Education Funding Councils for England, Scotland and Wales,
Bristol.
HEFC (1994), 1996 Research Assessment Exercise, Higher Education Funding Councils for
England, Scotland and Wales, Bristol.
Hood, C. (1991), “A public management for all seasons?”, Public Administration, Vol. 69, Spring,
pp. 3-19.
Hopper, T.M. (1986), “Private sector problems posing as public sector solutions”, Public Finance
and Accountancy, 3 October, pp. 11-13.
Hopwood, A.G. (1985), “Accounting and the domain of the public: some observations on current
developments”, The Price Waterhouse Public Lecture on Accounting, University of Leeds,
November.
Humphrey, C., Miller, P. and Scapens, R. (1993), “Accountability and accountable management in
the UK public sector”, Accounting, Auditing & Accountability Journal, Vol. 6 No. 3, pp. 7-29.
Hutchinson, C. (1989), “Consistency and stability of UK academic publication output criteria in
accounting: a comment”, British Accounting Review, Vol. 21 No. 3, pp. 279-84.
Hutton, W. (1994), “Time to sever the thin blue line”, The Guardian, 31 October, p. 10.
Jones, C.S. (1986), “Universities, on becoming what they are not”, Financial Accountability and
Management, Vol. 2 No. 2, pp. 107-19.
Jones, C.S. (1994), “Changes in organisational structures and procedures for resource planning in
three British universities: 1985-92”, Financial Accountability and Management, Vol. 10 No. 3,
pp. 237-51.
Kogan, M. and Kogan, D. (1983), The Attack on Higher Education, Kogan Page, London.
Livingstone, K. (1989), Livingstone’s Labour: A Programme for the Nineties, Unwin Hyman,
London.
Lyall, D. (1978), “UK university accounting departments: journal output performance, 1972-76”,
AUTA Review, Vol. 10 No. 1, Spring, pp. 22-5.
Mace, J. (1993), “University funding changes and university efficiency”, Higher Education Review,
Vol. 25 No. 2, Spring, pp. 7-22.
Mackie, J.L. (1986), Ethics: Inventing Right and Wrong, Penguin Books, Harmondsworth.
Mar Molinero, C. (1986), “The social impact of the 1981 cuts in British university expenditure”,
Educational Studies, Vol. 12, pp. 265-78.
Mar Molinero, C. (1989), “A multidimensional scaling analysis of the 1986 ratings of universities
in the UK”, Higher Education Review, Vol. 21 No. 2, pp. 7-25.
Mayston, D.J. (1985), “Non-profit performance indicators in the public sector”, Financial
Accountability and Management, Vol. 1 No. 1, pp. 51-74.
Murphy, P.S. (1994), “Research quality, peer review and performance indicators”, The Australian
Universities’ Review, Vol. 37 No. 1, pp. 14-18.
Neimark, M. (1993), “Looking for Copernicus”, message from the Chair, In the Public Interest,
Vol. 19, June, pp. 1, 14.
Nobes, C.W. (1987), “Publication output and assessment of departments: a note”, British
Accounting Review, Vol. 19 No. 3, December, pp. 289-90.
O’Brien, P.K. (1994), “Research selectivity exercises: a sceptical but positive note”, Higher
Education Review, Vol. 26 No. 3, pp. 7-17.
Owen, D., Humphrey, C. and Lewis, L. (1994), Social and Environmental Accounting Education in
British Universities, Research Report No. 39, The Chartered Association of Certified
Accountants, London.
Pollitt, C. (1986), “Beyond the managerial model: the case for broadening performance assessment
in government and the public services”, Financial Accountability and Management, Vol. 2
No. 3, pp. 155-70.
Puxty, A.G., Sikka, P. and Willmott, H.C. (1994), “Systems of surveillance and the silencing of UK
academic accounting”, British Accounting Review, Vol. 26 No. 2, pp. 137-71.
Sizer, J. (1988), “British universities responses to events leading to grant reductions announced in
July 1981”, Financial Accountability and Management, Vol. 4 No. 2, pp. 79-97.
Swinnerton-Dyer, P. (1985), “Higher Education into the 1990s”, Higher Education Review, Vol. 17
No. 2, pp. 137-72.
Times Higher Education Supplement (1988-94), various issues.
Tinker, T. (1985), Paper Prophets, Holt, Rinehart & Winston, Eastbourne.
UFC (1992a), Research Assessment Exercise 1992: The Outcome, Circular 26/92, Universities
Funding Council, London.
UFC (1992b), Research Assessment Exercise 1992, Circular 5/92, Universities Funding Council,
London.
UGC (1988), Accountancy Teaching in Universities, University Grants Committee, London.
Weetman, P. (1993), “Recruitment by accounting departments in the higher education sector: a
comment on the Scottish experience”, British Accounting Review, Vol. 25 No. 3, pp. 287-300.
Whittington, G. (1993), “Education and research notes: the 1992 research assessment exercise”,
British Accounting Review, Vol. 25 No. 4, pp. 383-95.
Willmott, H. (1995), “Managing the academics: commodification and control in the development of
university education in the UK”, Human Relations (forthcoming).
