The article makes some interesting points, but overall I don't find it terribly convincing.
--- While they do comment that retractions are just the most extreme version of this problem, the retraction rate of 0.02% they mention near the beginning does not strike me as a crisis in science.
--- I'd also like to see more acknowledgment that the degree of this kind of problem depends pretty heavily on the field, instead of just lumping everything together as "science." Physics has had its share of frauds (e.g. J. H. Schön[1]) and certainly puts some weak papers in big journals, but I have found that publication in one of the big journals really is meaningfully correlated with quality on average.
--- Earlier sections of the paper criticize papers in high-impact journals for being less reliable and of lower quality. Much further on, there's a paragraph or two acknowledging that these papers, by dint of being in those journals, typically report more novel results and receive far more scrutiny than typical papers. It would therefore be unsurprising if the true error rate actually were higher, and one would certainly expect the error detection rate to be much higher for these articles. Further, I expect this meta-result (prominent papers come up short more often) would persist even if you got rid of the journal model as they propose.
--- That last bit is something I don't feel they address at all. Any model of "filter, sort and discovery" is going to produce papers which are more successful/visible than others. The career incentives of scientists will still be essentially the same in a new system. Setting aside the path-dependence problems of switching to a new model (when basically all of the present actors are well calibrated to the old system), I'm very skeptical that their proposed solutions will actually prove less gameable than the current one.
I thought it was a bit stronger than that. The core message I took from it was that the "value" of rank offsets the cost of cheating: the expected value of getting your paper into a high-rank journal outweighs the expected cost of getting caught cheating. This in turn makes the science in high-rank journals more suspect, which is counterintuitive.
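To make that reading concrete, here's a minimal back-of-the-envelope sketch of the incentive argument. All of the probabilities and payoffs below are purely illustrative assumptions, not numbers from the article:

```python
# Back-of-the-envelope expected-value comparison for the "cheating pays" reading.
# Every number here is an illustrative assumption, not a figure from the article.

def expected_value(p_accept, value_of_paper, p_caught=0.0, penalty=0.0):
    """EV of a submission: payoff if accepted minus expected penalty if caught."""
    return p_accept * value_of_paper - p_caught * penalty

# Honest result submitted to a high-rank journal: low odds of acceptance.
honest = expected_value(p_accept=0.05, value_of_paper=100)

# Embellished/fraudulent result: much better odds of acceptance, but a small
# chance of being caught and a large career penalty if so.
cheat = expected_value(p_accept=0.50, value_of_paper=100,
                       p_caught=0.02, penalty=500)

print(f"honest EV: {honest:.0f}, cheat EV: {cheat:.0f}")
# => honest EV: 5, cheat EV: 40 -- with these made-up numbers cheating comes out
# ahead, which is the incentive problem described above.
```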
While I agree that 'impact factor' is probably not a good thing per se, there are also a lot of really sketchy journals out there, so some sort of rating is useful. Mixing it into the tenure computation, though, leaves no independent variables to work with.
AFAIK, in economics there's pretty much a consensus on journal rankings among researchers.
If you want to be a tenured professor, you'd usually be expected to have at least a certain number of publications in the top journals.
Also, I remember an interesting comparison among the top 10 econ departments that looked at grad students and the number of top-journal publications they had 5 years after graduation.
The bottom line was that the top 5 students at MIT/Harvard really do have many more publications than other grad students, but the median MIT/Harvard graduate doesn't perform better than a typical grad from a tier-1 school.
[1] This is a pretty good book on the subject: http://en.wikipedia.org/wiki/Plastic_Fantastic