The central challenge in bibliometrics is finding the best ways to represent complex constructs like 'quality,' 'impact' or 'excellence' using quantitative methods. The marketplace for bibliometric data and services has evolved rapidly, and users face unprecedented choice in the range of data available: from traditional citation-based indicators to reader ratings and Wikipedia mentions. Choice and ease of access have democratised bibliometrics, making it a tool available to everyone. The era of 'desktop bibliometrics' should be welcomed: it promises greater transparency and the opportunity for experimentation in a field that has frankly become a little jaded. The downside is that we are in danger of chasing numbers for numbers' sake, with little understanding of what they mean. There is a looming crisis in construct validity, fuelled by supply-side choice and user-side impatience, and this has significant implications for all stakeholders in the research evaluation space.