[csaa-forum] ERA rankings

Ned Rossiter ned at nedrossiter.org
Thu Jul 10 09:40:38 CST 2008


Sadly, if academics do not organize refusal on a mass scale, there is a very real likelihood that journals ranked C and possibly B will face closure - a point Douglas Kirsner also notes in yesterday's HES. This will occur less through a rush by spineless academics to higher-ranking journals - which will not, after all, be likely to publish more work just because they receive more submissions; on the contrary (and aside from the increased administrative burden those journals will face), such a rush will only reinforce the perception of those journals' superior quality through the crude logic of scarcity. Rather, the closure of lower-ranking journals will happen because the voluntary labour that makes their production possible will no longer be available, for various reasons - the impact of increased pressure on 'low-ranking' researchers from administrative regimes framed around professional management will be a key factor here. In other words, time devoted to the production of journals will be taken up by yet more administrative stupidity and heavier teaching loads.

It's worth recalling a fact of the RAE that is all too often overlooked in the hysteria that accompanies much of the critique of the RQF and the very odd embrace of the ERA by some: journals in the RAE are not ranked; individual pieces of research are. There's
no doubt that, as Danny notes, the assessment of a piece of research  
by RAE review members is coloured by the perceived esteem of  
establishment journals - and no surprise that most of these are  
nationally/British based. And, through what is effectively structural  
determination, that review group largely consists of conservative  
academics.  But in principle, no journal or publisher is ranked above  
or below any other. The emphasis was on the merits of individual pieces of work - four submissions over the period of review (2001-2007) - taking esteem indicators (recognition, influence, benefit) and research environment into account.

This does not result in 'indolent researchers', as the ill-informed  
Neville Nicholls put it in a recent HES op-ed. Given time and  
resources, strong researchers will always publish at pretty  
consistent and frequent rates -- their capacity to obtain grants,  
compete for appointments at other universities, attract post-graduate  
students, and hold international relevance in their field depends  
upon ongoing research outputs.

Any preference for the blanket ranking system of the ERA over the
peer review system of the RAE/RQF contains a pretty substantial  
contradiction: both, let's not forget, are predicated on 'blind peer  
review'. Granted, the RAE's review of four individual pieces of  
research - research, btw, that is not limited to journal outputs, but  
includes a wide range of research forms (books, exhibitions, films,  
websites, artworks, musical scores and recordings, etc) - is  
conducted by publicly known members of each discipline's review  
committee. But within that group, review is blind as far as  
individual researchers are concerned. Moreover, individual  
researchers are not given a ranking (at least not publicly/beyond the  
review committee); rather, the university-based research unit (dept, school, centre) is given a ranking as a whole. For all its faults,
the RAE did bestow upon disciplines a degree of autonomy to self- 
determine the criteria by which research was assessed. Moreover, it  
required disciplines to self-organize in order to do this.  It seemed  
to me that the RQF also held this potential, depending on how it was  
implemented.

Another major problem with the ERA: it assumes that all articles in A*/A journals - whatever their individual merits and 'quality' - are by default of higher quality, more influential, etc., than those in lower-ranking journals. This grab-all approach rests on massive flaws. So it's fine for editors of and contributors to A*
and A ranking journals to feel happy about the ranking they've  
obtained, no matter how dubious and methodologically problematic that  
ranking is. Many of us on this list publish or are associated with  
such journals after all.  But it comes at the cost of intellectual  
and disciplinary diversity and innovation made possible through a  
wide variety of publishing outlets and formats.  This, to me, is the  
greatest issue at stake in going along with the ERA.

So even if your lobbying efforts (which merely reinforce an existing system rather than seriously test it, and need to be carefully
distinguished from 'the political' that underscores the force of  
intervention) within a highly circumscribed review process result in  
your favourite journals being bumped up a notch or two, the end  
result remains the same: intellectual impoverishment.

If senior CS advisers to the ERA had any appreciation of the  
implications of supporting such a disastrous system, they would also  
adopt the position I continue to advocate: refusal.  Any other option  
requires an updating of Donald Horne's The Lucky Country -- The  
Clever Country? Maybe not. And how viable is that?

Ned
