Tuesday 20 August 2013

H-index explained

We hear a lot about the H-index, sometimes called the Hirsch index or Hirsch number after Jorge E. Hirsch, the guy who made it up.

The h-index attempts to measure both the productivity and impact of the published work of a scholar. So, quality and quantity.

It is sometimes applied to the productivity and impact of a group of scientists (such as a department, university or country), and sometimes to a scholarly journal.

How is it calculated and what does it actually mean?
 
The h-index:
A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np-h) papers have no more than h citations each.

Well, if you list all of an author's publications from most cited to least cited and number them, there will be a point where the rank in the list is greater than the number of citations for that paper. In the example below, paper 6 only has 4 citations, so that researcher's h-index is 5.
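If you prefer to see that rule as code, here is a minimal sketch in Python, using made-up citation counts that match the example (this is just an illustration, not anyone's official implementation):

    def h_index(citations):
        # Rank the papers from most cited to least cited, then find the
        # last rank at which the citation count is still >= the rank.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # A hypothetical researcher: papers 1-5 each have at least 5 citations,
    # but paper 6 only has 4, so the h-index is 5.
    print(h_index([20, 14, 9, 7, 5, 4, 2]))  # prints 5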

There are about a million problems with the h-index, yet it is still widely regarded as the best single measure of scholarly impact.
Some of the many things that can make the h-index misleading are:
  • It doesn't give any extra credit for very highly cited papers. For example, another scholar could also have an h-index of five, but with each of their top 5 papers cited only 5 times!
  • The h-index does not account for the number of authors on a paper. It counts as your paper even if you are one of 10 authors.
  • The h-index does not account for the typical number of citations in different fields.
  • The h-index puts scientists with a short career (and therefore fewer publications) at a disadvantage, no matter how influential a single paper might be.
  • The h-index does not consider the context of citations.
  • The h-index can be manipulated through self-citations.
To compensate for these flaws (and more), a whole alphabet of other indexes has been introduced to take into account things like very highly cited papers, early-career researchers' impact, and so on.

Next time we will look at some of these, and how they might be useful to researchers whose h-index is not so flash on its own.



Thursday 15 August 2013

Journal Impact Metrics for Dummies...

Before I even start - journal-level metrics are not an accurate measure of journal quality.
What they are, though, is a useful measure of journal impact and prestige... so we need to take them into account when considering publishing anything.



JIF - Journal Impact Factor – “measures” how often articles in journals are cited.

Or, the average number of citations received in a given year by the papers that a journal published in the previous two years.

E.g. the 2010 IF is the average number of citations received in 2010 by papers published in 2008 and 2009.
  • JIF can be found using Thomson Reuters Journal Citation Reports
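To make the two-year arithmetic concrete, here is a small sketch with invented numbers (not real JCR data) for a hypothetical journal's 2010 impact factor:

    # Citations received in 2010 by this journal's 2008 and 2009 papers
    citations_2010_to_2008_papers = 150
    citations_2010_to_2009_papers = 90

    # Citable items (articles, reviews) the journal published in those years
    items_2008 = 60
    items_2009 = 40

    jif_2010 = (citations_2010_to_2008_papers + citations_2010_to_2009_papers) \
               / (items_2008 + items_2009)
    print(jif_2010)  # 240 / 100 = 2.4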

 
SNIP - Source Normalised Impact per Paper – “measures” contextual citation impact by weighting citations based on the total number of citations in a subject field


Citation potential is shown to vary not only between subject categories or disciplines but also between different “types” of journals within the same subject category.

E.g. basic journals vs. applied/clinical journals
  • SNIP can be found using Scopus Journal Analyzer
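Very roughly, the normalisation works by dividing a journal's citations per paper by how heavily its field cites overall, so journals in low-citing fields are not penalised. A toy sketch with invented numbers (a big simplification of the actual CWTS method):

    def snip_like(citations_per_paper, field_citation_potential):
        # Divide raw impact per paper by the field's average citation potential.
        return citations_per_paper / field_citation_potential

    # Two hypothetical journals with identical raw impact (4 citations per paper):
    print(snip_like(4.0, field_citation_potential=2.0))  # low-citing field   -> 2.0
    print(snip_like(4.0, field_citation_potential=8.0))  # heavy-citing field -> 0.5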

And my personal favourite...
SJR - SCImago Journal Rank – “measures” the prestige of a journal based on which journals have cited it, and which journals it cites (and how many times this occurs).


“…based on the transfer of prestige from a journal to another one; such prestige is transferred through the references that a journal do to the rest of the journals and to itself.”
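In other words, it is a PageRank-style calculation over the journal citation network: prestige flows along citations, and a citation from a prestigious journal is worth more than one from an obscure journal. Here is a stripped-down sketch (a plain power iteration over an invented three-journal citation matrix, ignoring SCImago's size normalisation and self-citation caps):

    # cites[i][j] = how many times journal i cites journal j (invented numbers)
    cites = [
        [0, 10, 2],   # Journal A
        [4,  0, 1],   # Journal B
        [6,  8, 0],   # Journal C
    ]
    n = len(cites)
    damping = 0.85
    prestige = [1.0 / n] * n

    for _ in range(50):  # iterate until the scores settle
        new = []
        for j in range(n):
            inflow = sum(prestige[i] * cites[i][j] / sum(cites[i])
                         for i in range(n) if sum(cites[i]) > 0)
            new.append((1 - damping) / n + damping * inflow)
        prestige = new

    print(prestige)  # a higher score means citations come from prestigious journals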


Stay tuned for more Basic Bibliometrics for Librarians! Baby-steps, baby-steps...

Next post... "H-index for Dummies"

Beginners' Bibliometrics?

Are you wondering how bibliometrics affect academics and institutions? Would you like to help create publication plans, impact reports, and more?
Introducing the new series...

Basic Bibliometrics for Librarians! (or Bibliometrics in baby-steps)

There are so many people wanting to know more about Bibliometrics, Impact, and other research support services, that I thought I would start a series of posts with some very short, quick intros to, well, some stuff.

I have learned so much over the past few months I just thought I'd put a few things out there for others. Let me know if there is actually any interest... If you have suggested topics, post them in the comments and I'll start scheduling some future posts.

So, what are Bibliometrics? 

Bibliometrics are methods of statistically analysing publications and their citations. That's all.

Increasingly, bibliometrics are being used as a measure of research impact or research influence. This can affect ranking and funding of authors and of institutions. That's why we need to know about it.


A common example of bibliometrics is citation analysis - for example, how many times a researcher's work has been cited in key literature.
Citation analysis is used both in searching for materials and in judging their quality.

Some data that is used for citation measurement includes:
  • Number of times an author is cited
  • Number of times an article is cited
  • Number of articles published
  • Number of articles published in a journal each year
  • Number of journals in a subject area
  • Half-life of journals
  • Cited half-life of journals (see the sketch after this list)
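To give one concrete example from that list, the cited half-life is roughly the median age of the citations a journal receives in a given year. A quick sketch with invented numbers:

    # Citations received by a hypothetical journal in 2013, broken down by the
    # publication year of the cited articles (invented numbers)
    citations_by_year = {2013: 10, 2012: 40, 2011: 35, 2010: 20, 2009: 15, 2008: 5}

    total = sum(citations_by_year.values())   # 125
    running = 0
    for age, year in enumerate(sorted(citations_by_year, reverse=True), start=1):
        running += citations_by_year[year]
        if running >= total / 2:               # reached 50% of all citations
            print(f"Cited half-life: roughly {age} years")
            break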
Citation measurement is not perfect because:
  • No single data source is comprehensive. 
  • Publication dates can affect results.
  • How frequently a journal is published can also affect results.
  • An article may be cited because it is really dodgy.
  • Commercial products used for citation counts do not consider website sources, repositories or open source resources.
  • Some articles might be widely read by individuals who never publish.
  • Only a small number of articles are highly cited, and these are found in a small number of journals (and fields).
Thanks to Macquarie University Library for some of this info...

Next post will be.... Journal Impact for Dummies.