Still has all the problems that counting citations does: how do you count multiple-author papers? (Particle physics has given up on ranking authors and just lists the hundreds of authors in alphabetical order, IIRC.) And it encourages people to write piecemeal papers, because why say something in one paper if you can say it in two -- and get twice the citations?
Also, my favorite: What if you cite someone to say that they were completely wrong when they approached the same problem? Should that be counted as a positive for them?
Presumably, no one is going to try to prove something wrong that has already been proven wrong, so unless there are many obscure flaws in one's work that take multiple papers to uncover, it shouldn't affect the citation count much.
In introductions, papers often give a short overview of what's been done before, so early work in a field often gets cited even if it turns out to have been flawed. (It can't be completely ridiculous, of course, it must have sounded plausible at the time, but even shoddy research often does.)
this is almost pagerank for academic publications, and would probably be improved by being more like pagerank (i.e. also taking into account how often the citing papers are themselves cited)
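for concreteness, here's a minimal sketch of pagerank run over a toy citation graph (power iteration with a damping factor); all the paper names and citations below are made up for illustration:

```python
# Toy citation graph: paper -> papers it cites (invented data, illustration only).
cites = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": [],
    "D": ["A", "C"],
}

def pagerank(cites, damping=0.85, iters=50):
    papers = list(cites)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in papers}
        for p, refs in cites.items():
            if not refs:  # a paper citing nothing spreads its rank evenly
                for q in papers:
                    new[q] += damping * rank[p] / n
            else:  # otherwise its rank flows to the papers it cites
                for q in refs:
                    new[q] += damping * rank[p] / len(refs)
        rank = new
    return rank

print(pagerank(cites))  # "C" ranks highest: cited by everything, including well-cited "A"
```

the point of the refinement: a citation from a highly-ranked paper counts for more than a citation from an obscure one, which raw citation counts can't express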
There are a lot of people trying to use pagerank for academic journals, but so far it hasn't worked well for various reasons.
Part of the problem is that the metaphor breaks down: a paper is like an individual webpage, but a journal is like a company -- it has a much longer timeline, and its impact varies over time. Also, unlike web links, citations don't go away; they just accumulate over time. Since the point of these citation metrics is to rate journals (and maybe scientists), pagerank has some difficulties in the domain. It works better for ranking individual papers than for ranking scientists or their journals.
This shouldn't be too surprising: TechCrunch (for example) probably has a good rank on many pages, but pagerank doesn't tell us anything about Michael Arrington's reputation.
But we're not talking about ranking journals. We're talking about ranking authors. JIF is a reasonable metric for journals; the problem is that it's used to rate authors: what's the JIF of the journals you publish in?
The metric presented here is much better for rating authors because it gives more of an author's peers an opportunity to vouch for him by citing his work, as opposed to only a small editorial board and review committee who decide if he gets into TopJournalX.
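For concreteness, assuming the metric under discussion is the h-index (the largest h such that h of an author's papers have at least h citations each), here's a minimal sketch:

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical author with five papers cited 10, 8, 5, 3 and 2 times:
print(h_index([10, 8, 5, 3, 2]))  # 3 -- three papers have >= 3 citations each
```

Any peer who cites the work moves the number, not just the handful of gatekeepers at TopJournalX.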
Adding a pagerank-style coefficient (increasing the weight of citations that come from well-cited papers) would make this metric even better, for precisely the reason you state: papers exist in perpetuity. If I write a paper now and it is ignored for 50 years, and then someone builds upon it to break ground in an entirely new field, I deserve some indirect credit for that. The journal I published in does not.
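A crude, non-recursive version of that coefficient (a sketch, not full pagerank; the data is invented) is to weight each incoming citation by how often the citing paper is itself cited:

```python
# paper -> papers that cite it (invented data for illustration)
cited_by = {
    "old_paper": ["breakthrough"],
    "breakthrough": ["f1", "f2", "f3"],
    "f1": [], "f2": [], "f3": [],
}

def weighted_citations(paper):
    # Each citer contributes 1 plus its own citation count,
    # so citations from well-cited papers weigh more.
    return sum(1 + len(cited_by[c]) for c in cited_by[paper])

# The single citation from the heavily-cited "breakthrough" paper is
# worth more than three citations from papers nobody cites:
print(weighted_citations("old_paper"))     # 4
print(weighted_citations("breakthrough"))  # 3
```

In your example, the ignored-for-50-years author starts accruing indirect credit the moment the paper built on theirs takes off.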
Empirically, pagerank hasn't been very successful at ranking authors for the reasons I mentioned, along with other complications (e.g. papers have multiple authors).
But more importantly, you're confusing impact factor with peer review. Peer review decisions are double-blind, and impact factor doesn't play a role (shouldn't, anyway). Papers don't get published in Science and Nature based upon the authors' impact factors.
"There are a lot of people trying to use pagerank for academic journals, but so far it hasn't worked well for various reasons."
Apart from eigenfactor.org, what other examples do you know of?
I'm not aware of anyone using PageRank for individual articles. (I know this isn't what you were referring to in your comment).
I'd be interested to know what algorithm Google Scholar uses to compute its rankings. The rankings it returns seem to be pretty close to pure citation counts, with some minor variations, which could perhaps be explained by some measure of each hit's relevance to the query.
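Pure speculation on my part (this is not Google Scholar's actual algorithm), but a scoring rule like the following, where citation count dominates and relevance only nudges, would produce exactly that pattern:

```python
import math

def score(citations, relevance):
    """Hypothetical ranking: citations dominate, relevance (0..1) nudges."""
    return math.log1p(citations) * (0.8 + 0.2 * relevance)

# Invented hits: (name, citation count, query relevance)
hits = [("paper A", 500, 0.9), ("paper B", 550, 0.2), ("paper C", 30, 1.0)]
for name, c, r in sorted(hits, key=lambda h: -score(h[1], h[2])):
    print(name, round(score(c, r), 2))
# paper A outranks the slightly more-cited paper B because it
# matches the query better -- "citation counts with minor variations"
```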
Reading past the usual academic exaggeration (where everything is "promising" and "has potential"), the data is underwhelming -- there's no clear indication that pagerank has an advantage over traditional citation metrics.
Here's a Google cache link to a paper that discusses some of the things I was talking about (i.e. how the metaphor breaks down when moving from web to journals):
Presumably they'd do better than with the JIF system. Orthodoxy isn't the problem; he's still getting published and cited. He just can't get grants, apparently because of his 1989 conference.
regardless of whether it is the right thing to do, what he is trying to do could be improved by being more like pagerank. i don't have an opinion on whether it is the right thing to do; i haven't given that much thought.
Unfortunately, both systems are too easy to game for those who happen to be more unprofessional than average.
In the end, these sorts of systems are foisted on us by the paid bureaucrat-class that pays itself quite well for doing all that really hard work of managing academics. Figuring out whether someone is a hotshot scientist would mean reading his papers, and that's way too much work.
1) Whether someone is a hotshot scientist is a subjective matter of opinion.
2) The question they are asking is not "How important are your contributions to science?", but "How important do your peers think your contributions to science are?"
I'd probably go as far as to claim that it's nonsensical to search for an objective value metric for contributions to science. Scientific contributions are extremely heterogeneous, and value judgements are equally varied.
Yes. There is no purely objective measurement, but... you still need to hire some people and not others. Value judgments are not randomly distributed. Science is built upon forging objective consensus out of individual opinions. It even works pretty well.