Despite Becoming Increasingly Institutionalised, There Remains a Lack of Discourse About Research Metrics Among Much of Academia
The active use of metrics in everyday research activities suggests academics have accepted them as standards of evaluation, that they are “thinking with indicators”. Yet when asked, many academics profess concern about the limitations of evaluative metrics and the extent of their use.
High-Impact and Transformative Science Metrics: Definition, Exemplification, and Comparison
Presents a novel set of text- and citation-based metrics that can be used to identify high-impact and transformative works. The 11 metrics can be grouped into seven types: Radical-Generative, Radical-Destructive, Risky, Multidisciplinary, Wide Impact, Growing Impact, and Impact (overall).
Does Bibliometric Research Confer Legitimacy to Research Assessment Practice? A Sociological Study of Reputational Control, 1972–2016
A growing gap exists between an academic sector with little capacity for collective action and increasing demand for routine performance assessment by research organizations and funding agencies. This gap has been filled by database providers. By selecting and distributing research metrics, these commercial providers have gained a powerful role in defining de facto standards of research excellence without being challenged by expert authority.
A Simple Proposal for the Publication of Journal Citation Distributions
Although the Journal Impact Factor (JIF) is widely acknowledged to be a poor indicator of the quality of individual papers, it is used routinely to evaluate research and researchers. Here, we present a simple method for generating the citation distributions that underlie JIFs. Application of this straightforward protocol reveals the full extent of the skew of these distributions and the variation in citations received by published papers that is characteristic of all scientific journals. Although there are differences among journals across the spectrum of JIFs, the citation distributions overlap extensively, demonstrating that the citation performance of individual papers cannot be inferred from the JIF. We propose that this methodology be adopted by all journals as a move to greater transparency, one that should help to refocus attention on individual pieces of work and counter the inappropriate usage of JIFs during the process of research assessment.
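The statistical point above — that a journal-level average cannot stand in for the citations of a typical paper — can be illustrated with a minimal sketch. This is not the authors' protocol; it simply models citations per paper with a log-normal draw (an assumed distribution chosen only because it is right-skewed, like real citation data) and compares the mean, which is what a JIF-style figure reports, with the median:

```python
import random
import statistics

# Assumption: a log-normal model stands in for a journal's skewed
# citation distribution; the parameters (1.5, 1.2) are illustrative only.
random.seed(42)
citations = [int(random.lognormvariate(1.5, 1.2)) for _ in range(1000)]

jif_like_mean = statistics.mean(citations)  # what a JIF-style average reports
median = statistics.median(citations)       # what a typical paper receives

print(f"mean (JIF-like): {jif_like_mean:.1f}, median: {median:.1f}")
```

Because a right-skewed distribution's mean sits well above its median, the journal-level average overstates the citation performance of a typical paper in that journal, which is exactly why the authors argue for publishing the full distributions.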
As of May 2018, CORE has aggregated over 131 million article metadata records, 93 million abstracts, 11 million hosted and validated full texts and over 78 million direct links to research papers hosted on other websites.
A study identifies papers that stand the test of time. Fewer than two out of every 10,000 scientific papers remain influential in their field decades after publication, finds an analysis of five million articles published between 1980 and 1990.