Average rating: 4 of 5
Level of importance: 5 of 5
Level of validity: 3 of 5
Level of completeness: 3 of 5
Level of comprehensibility: 5 of 5
Competing interests: I work for ScienceOpen, a platform that employs peer review by endorsement and post-publication peer review models.
General Comments
This article by J. Velterop is a valuable review of the limitations of traditional peer review that also proposes an innovative potential solution. In places, it would benefit from additional evidence documenting the limitations of the current peer review process, in order to justify the need for a new model.
Abstract
The abstract is generally concise and provides a good overview. However, it does not mention peer review by endorsement, even though this proposed solution to the problems of traditional models is the central point of the paper.
Introduction
For an introductory section on peer review, an extensively studied topic, this could do with additional references to establish the context of the manuscript and to strengthen the implied claim that something is broken in traditional peer review. There are also many different modes of peer review within the traditional model. The paper should make explicit exactly which process is being discussed, which will be especially important for readers unfamiliar with it.
Slow
There has been recent exposure of delays in publication times across a range of journals: http://blog.dhimmel.com/plos-and-publishing-delays/. Are there similar data on the duration of peer review for different journals, publishers, and research communities? Data are needed to demonstrate that the traditional method of peer review is inferior to any proposed model, or that the slowness of peer review has a detrimental impact on the scholarly publishing process. A simple counter-argument is that a quality peer review needs to take a long time; without data, both sides can be supported only equally weakly.
Does the slowness of peer review have a greater impact at different levels in academia? For example, are PhD students and early career researchers with fewer publications at greater risk from slow publication processes? Do preprints help to mitigate some of this?
Inefficient
As above, are there any data on the average number of submissions required prior to publication, and on the detrimental impact this might have? It might be worth stating why rejection rates are so high at 'higher impact' journals, and how this drives inefficiency. It should also be noted that manuscripts are often seen by the same reviewers at different venues, with or without the changes recommended in previous rounds of peer review. Not only is this evidence of researchers 'playing the game'; this 'down the ladder' approach can also allow articles that fail peer review at one journal to pass at another without addressing the issues that originally led to rejection.
Unreliable
Richard Horton in this case appears to be citing a paper by John Ioannidis (http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124), which is worth citing directly rather than via an editorial comment. The 'pecking order' is also defined by the reputation and 'prestige' of a journal, along with its impact factor. Research showing that conforming to this order defined by journal rank has detrimental effects on research should be cited (http://journal.frontiersin.org/article/10.3389/fnhum.2013.00291/full).
Highly variable
Variability is supposed to be controlled by the editorial process; however, who monitors this and ensures it is unbiased is another matter. Variability arises from making the peer review process exclusive, subjective, and secretive, factors that also produce a lack of accountability and make the process difficult to manage. Studies based on randomised controlled trials have shown that peer review often fails to detect the most 'impactful' research, and some of this work should be cited to support the statements in this section.
Ineffective
Do any data exist showing the prevalence of misconduct within the traditional peer review process?
Arbitrary
Are there any updated data on this beyond 1982? I would be surprised if not! It might be worth noting how peer review 'rings' are facilitated by the present system, and how John Bohannon's 'sting' operation revealed much about the disconnect between reviewer recommendations and editorial decisions. Have any additional studies attempted to replicate assessments of the quality of peer review?
Undermining scientific scepticism
I think this is an important and often overlooked point. Research articles are never the final statement on a matter; if they were, researchers would never write that 'more research is needed', and there would never be any further grants. Peer-reviewed articles should therefore rarely, if ever, be considered 'truth-bearing'. It would be interesting to see whether there is any research on public perceptions of 'truth' in relation to peer-reviewed articles. One could take the argument a step further and suggest that if researchers treat peer-reviewed articles as inherently valid, they forgo their duty as researchers to be critical of the evidence on which they base their work. Even reading papers is a form of 'peer review', and it is imperative that the research community treats research as a continuous process and remains sceptical at all stages of the publication and communication of research.
Confirmation-biased
This section could be used to make a statement about 'negative research results'. I would argue that there is no such thing as 'negative results': the only reason they appear negative is that they do not fit prior expectations (i.e., bias), support the desired conclusions, or fit the narrative of the research. None of these are objectively negative; perhaps they are best considered 'alternative'. It is imperative that alternative results are published and not filtered out by the publishing process. This would help to reduce the wastefulness and redundancy of research, as well as the need to manipulate results to fit expectations. Registered reports, non-selectivity of results (i.e., relaxed editorial criteria), and a change in the culture of academia are all required to remove the impact of the confirmation biases outlined here, and could be commented on further.
Putting careerism before science
There’s a typo in the quote “perish’hhas”.
Is there evidence that such careerism has a detrimental effect on scholarly publishing? Does competition enhance or inhibit research? Are there any data on these questions?
Expensive
The cost of peer review is somewhat glossed over, and I do not think the calculation presented is particularly valid. There are additional marginal costs, and costs external to technical preparation, such as marketing, that need to be factored in. Page 6 of this report (http://www.rin.ac.uk/system/files/attachments/Activites-costs-flows-report.pdf) suggests that peer review costs £1.9 billion per year, bearing in mind that little to none of this goes to the peer reviewers themselves. It might be worth calculating how much of this could be saved if much more efficient peer review management systems were employed, such as those at Discrete Analysis or the Journal of Machine Learning Research, which cost almost nothing to run. These costs are considerably lower than at traditional subscription-based or hybrid journals, as noted here: http://f1000research.com/articles/5-632/v1 (apologies for the self-citation, but we include much discussion of the costs of publishing there).
Quo vadis?
This provides a nice summary and wrap-up of the previous sections.
Peer review by endorsement
I think this section could do with a rewrite to explicitly address the issues outlined in each of the previous subsections. This would make it much easier to read, and would highlight the importance of peer review by endorsement (PRE) in transforming peer review. It might also be worth noting how other journals could go about establishing this system, or at least running a pilot test to see whether it works.
Other comments
This paper is generally well-written and is a valuable contribution to the published record and to ongoing discussions about the evolution of peer review. However, I feel there are sections where more evidence, which certainly exists, is required to strengthen the arguments within.
I am clearly biased given my position at ScienceOpen, but as this system was established before my employment, I am commenting purely from my perspective as a researcher interested in the development of peer review, and I have attempted to keep any such bias to a minimum. I have therefore focused my comments on why the need for what might be viewed as a radical transformation is so important, based on the underlying evidence regarding the problems with traditional peer review.