A couple of recent posts at Occam's Typewriter, on journal impact factors and on metrics in general, discuss the evaluation of journals, researchers and institutions. In particular, Athene Donald's post links to The Metric Tide, an in-depth evaluation of quantitative indicators.
What I find interesting is that The Metric Tide compares (in Supplementary Report II) various metrics with the REF quality profile, which ranks the outputs of individual UK researchers in five categories: one to four stars, in increasing order of "originality, significance and rigour", plus the bottom drawer, discreetly labeled "unclassified".
The authors computed the precision and sensitivity of REF 4* predictions based on each indicator and the Spearman correlation of the indicator with the REF quality profile. In the areas of physics and chemistry, the best predictor seems to be the citation count, with a precision (percentage of correct predictions) of about 50%, a sensitivity (the proportion of REF 4* outputs identified by the metric prediction) of 85% and a correlation of 0.6.
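To make the two notions concrete, here is a minimal sketch of how precision and sensitivity of a "predict 4* from citation count" rule could be computed. The data and the citation threshold are entirely hypothetical, invented for illustration; they are not taken from the report.

```python
# Hypothetical example: evaluating a "predict REF 4* from citations" rule.
# Each output is (citation_count, is_ref_4_star). Illustrative data only.
outputs = [
    (120, True), (95, True), (80, False), (60, True),
    (45, False), (30, False), (25, True), (10, False),
]

threshold = 50  # hypothetical cutoff: predict 4* if citations >= threshold

predicted = [count >= threshold for count, _ in outputs]
actual = [star for _, star in outputs]

true_pos = sum(p and a for p, a in zip(predicted, actual))
precision = true_pos / sum(predicted)    # fraction of predicted 4* that are 4*
sensitivity = true_pos / sum(actual)     # fraction of real 4* that are found

print(f"precision = {precision:.2f}, sensitivity = {sensitivity:.2f}")
# → precision = 0.75, sensitivity = 0.75
```

In this toy sample, the rule flags four outputs as 4*, of which three actually are, and it recovers three of the four genuine 4* outputs, hence 75% on both measures.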
This is fairly imprecise, but the analysis is done over entire thematic fields (or units of assessment, as they are called in the report). The accuracy would probably improve if the comparison were restricted to sub-fields, which are more homogeneous in terms of audience sizes and citation practices.
What is clearly missing from the picture (and would be very hard to measure) is the influence of the various metrics themselves on the REF evaluation...