Category Archives: Peer Review

These commentaries are being collected in advance of the Transparency, Recognition, and Innovation in Peer Review in the Life Sciences meeting

How to launch a transformative and sustainable forum for publication and scholarly critiques of research in the life sciences?

By Harinder Singh
Director, Division of Immunobiology
and the Center for Systems Immunology
Cincinnati Children’s Hospital Medical Center

This perspective draws on the many insightful commentaries that have been posted on the ASAPbio site in the context of the HHMI/Wellcome/ASAPbio meeting on “Transparency, Recognition and Innovation in Peer Review in the Life Sciences.” The framework sketched below is an attempt to synthesize various ongoing efforts as well as intriguing ideas. It is clear from these commentaries that our shared intent is to strengthen and accelerate research in the life sciences through transformation of the publication landscape. Continue reading

Early Career Researchers and their involvement in peer review

By Gary McDowell, Future of Research

When it comes to peer review and the role that Early Career Researchers (ECRs) play in it, I am of course reminded of the immortal words of Steve McKnight in his President’s Message at the American Society for Biochemistry and Molecular Biology (ASBMB, emphasis mine):

the average scientist today is not of the quality of our predecessors; it’s a bit analogous to the so-called “greatest generation” of men and women of the United States who fought off fascism in World War II compared with their baby boomer children. Biomedical research is a huge enterprise now; it attracts riff-raff who never would have survived as scientists in the 1960s and 1970s. There is no doubt that highly capable scientists currently participate in the grant-review process. Likewise, unfortunately, study sections are undoubtedly contaminated by riff-raff.

Continue reading

In support of journal-agnostic review

By Vivian Siegel

In my own experience, and I’ve written about this in the past, peer review in the context of journal submission suffers from a number of biases. These include journal-based biases that would be eliminated by a journal agnostic process. I summarize the main points below.

We all know reviewers are biased (why? Because we’re human).

We’re biased based on our own history – how our own experiments have led to our view of the way a particular phenomenon works – and are much more skeptical of experiments that go against our views than we are of those that align with them. To the editors who expect a controversial paper to be enthusiastically endorsed by reviewers on both sides of the controversy: good luck. Continue reading

Preprint QC

By Bernd Pulverer, EMBO

As preprint posting takes hold in the biosciences community, we need both quality control and curation to ensure we share results in a reproducible and discoverable manner

The EC has taken the bold step – at least on paper – of proclaiming that by 2020 Europe is to be ‘Open Innovation’, ‘Open Science’ and, thankfully, ‘Open to the World’. The Open Science piece includes an ongoing project ‘The Open Science cloud’ as well as a dedicated online publishing/preprint platform – details pending at the time of writing. The promise of Open Science is of course no less than to dramatically increase the efficiency of the scientific process. Given the speed of these developments, now is the time to face up to a key issue in the life sciences: the question of trust and quality of data shared by Open Science mechanisms. Clearly, a 24/7 release of raw data by all labs will rapidly clog up even the most powerful repositories, while providing only limited benefit to the community. Without the provision of stable, structured databases and repositories that are curated and quality-controlled, we risk sinking in a swamp of data that will be largely ignored. Continue reading

Advancing peer review

By Elizabeth Moylan, Senior Editor, Peer Review & Innovation, BMC (part of Springer Nature)

At BMC, we’ve always supported innovation in peer review and were one of the first publishers to truly open up peer review in 1999. Fiona Godlee, then Editorial Director for BMC, explained the reasons for this decision, including ethical superiority (reviewers are accountable for their decisions and there is less scope for biased or unjustified judgements or misappropriation of data), lack of important adverse effects, feasibility in practice and recognition for reviewers. However, no one model of peer review is perfect, and the drawbacks to open peer review are that it can increase the number of reviewers who decline to review and increase the time taken to produce a report. Continue reading

Preprint Journal Clubs: building a community of PREreviewers

By Samantha Hindle and Daniela Saderi, PREreview


Preprints are freely available scientific manuscripts that have not yet undergone editorial peer review. They provide data and knowledge that is current, accessible by all, and at a stage where community peer review can contribute to scientific progression. Rather than restricting feedback to two or three journal-selected reviewers, preprints can be read and evaluated by a diverse population of interested scientists at different career stages. Theoretically, the advantages of opening up scientific evaluation to a larger pool of scientists should be straightforward: the more reviewers, the fewer mistakes – or to quote Linus’ Law, “Given enough eyeballs, all bugs are shallow.” Practically speaking, this can be more complicated as scientists have limited free time, are not well-incentivized for their reviewing activities, and some may argue that “too many cooks spoil the broth.” Continue reading

On the Need for Editorial Standards

By Damian Pattinson, Research Square


The term ‘peer review’ has come to mean any assessment performed on a manuscript prior to publication: a paper submitted to a journal ‘undergoes peer review’ and is published. In actual fact, preparing a manuscript for publication requires a huge number of checks, clarifications, tweaks and version changes from many different parties. But because of the tradition of confidentiality during the editorial process, much of this work has gone unnoticed by the academic community.

The checks a journal performs depend upon its editorial policies, and these policies have helped shape research practice in very positive ways over the years – things like mandatory trial registration, ethical approval for animal research, and deposition of sequence and expression data have only become standard practice because of strict journal policies. But the question of how these policies are policed is not a straightforward one. There has never been a full examination of who should check what in a research manuscript, and this has led to confusion on the part of reviewers and readers about what has been assessed and what has not. In this article I break apart the assessment process and propose a way in which all the relevant pieces can be assessed, whether for a journal or a preprint. Continue reading

New Forum for Peer Reviewed Research in the Biomedical Sciences

By Harinder Singh, Division of Immunobiology and the Center for Systems Immunology, Cincinnati Children’s Hospital Medical Center


Although major research advances are rapidly being made in the biological and biomedical sciences, the communication of these findings is hampered by existing publication forums. Despite the large and expanding number of journals, there are considerable limitations, including the cumbersome nature of the process. This often involves the review of a manuscript by multiple journals, resulting in substantial delays. More importantly, cursory or biased reviews and non-deliberative or capricious editorial judgements compound the problem. Finally, scholarly reviewers who provide critical context and interpretation of the findings are not appropriately recognized for their valuable insights and perspectives. Thus, there is an acute need to establish new publication forums that promote equal accountability and reward for authors and reviewers, and that nurture deliberative yet timely announcement of scientific findings. Continue reading

F1000: our experiences with preprints followed by formal post-publication peer review

By Rebecca Lawrence & Vitek Tracz, F1000

For over five years at F1000 we have been successfully running a service (which we call a platform, to distinguish it from traditional research journals) that is essentially a preprint coupled with formal, invited (i.e. not crowd-sourced) post-publication peer review. We have consequently amassed significant experience of running such a system. We summarise here the approach we have taken and share some data on our experiences and what we have learnt along the way, in order to support the discussions within the ASAPbio community and elsewhere.

We have developed and operated this model on F1000Research since 2013, and now operate the same model as a service to a number of funders and institutions, including Wellcome, the Bill & Melinda Gates Foundation, the Irish Health Research Board, the Montreal Neurological Institute and many others. The platforms are controlled by these organisations who provide them as a service to their grantees and researchers. Continue reading

It’s time to open the black box of peer review

By Jessica Polka and Ron Vale, ASAPbio


Opening the content of peer review reports—whether they are anonymous or not—will improve their quality, ensure that ideas that emerge through review are accessible to other researchers, and enable innovation and reform.

Peer review is considered an essential standard of scientific publishing. Despite complaints, most scientists feel that peer review remains a valuable part of the scientific process. Yet, for something deemed so essential and valuable to the process of establishing scientific credibility, it is surprising that it is disposed of once a paper has been accepted for publication. Some journals (EMBO, BMJ, eLife, PeerJ, and F1000Research, among others) are displaying the content of peer reviews alongside the published paper (termed “open reports” in Tony Ross-Hellauer’s taxonomy). However, they are in the minority; only 2.2% of life science journals publish open reports. Furthermore, even in cases where peer reports are open, they are not currently easily searchable.

Here we present the benefits and disadvantages of open reports. In comparing the two, we believe that the benefits outweigh the disadvantages and that the practice of releasing open reports should expand to become an industry-wide standard. Continue reading

Peer review as practised at Wellcome Open Research: analysis of Year 1

By Robert Kiley, Head of Open Research, Wellcome


In November 2016 Wellcome became the first research funder to launch a publishing platform for the exclusive use of its grantholders. Wellcome Open Research (WOR), run on behalf of Wellcome by Faculty of 1000 (F1000), uses a model of immediate publication followed by invited, post-publication peer review. All reviews are citable and posted to the platform along with the identities of the reviewers.

This short blog post discusses the motivations behind establishing this publishing platform, and presents some data about the peer review process, as practised at WOR. Continue reading

Scientific Publishing in the Digital Age

By Bodo M. Stern and Erin K. O’Shea
Howard Hughes Medical Institute
Chevy Chase, Maryland


Life scientists feel increasing pressure to publish in high-profile journals as they compete for jobs and funding. While academic institutions and funders are often complicit in equating journal placement with impact as they make hiring and funding decisions, we argue that one of the root causes of this practice is the very structure of scientific publishing. In particular, the tight and nontransparent link between peer review and a journal’s decision to publish a given article leaves this decision, and resulting journal-specific metrics like the impact factor, as the predominant indicators of quality and impact for the published scientific work. As a remedy, we propose several steps that would dissociate the appraisal of a paper’s quality and impact from the decision to publish it. First, publish peer reviews, whether anonymously or with attribution, to make the publishing process more transparent. Second, transfer the publishing decision from the editor to the author, removing the notion that publication itself is a quality-defining step. And third, attach robust post-publication evaluations to papers to create proxies for quality that are article-specific, that capture long-term impact, and that are more meaningful than current journal-based metrics. These proposed changes would replace publishing practices developed for the print era, when quality control and publication needed to be integrated, with digital-era practices whose goal is transparent, peer-mediated improvement and post-publication appraisal of scientific articles. Continue reading

APPRAISE (A Post-Publication Review and Assessment In Science Experiment)

By Michael B. Eisen

Investigator, Howard Hughes Medical Institute
Professor of Molecular and Cell Biology, UC Berkeley, @mbeisen

With the rapid growth of bioRxiv, biomedical research is entering a new era in which papers describing our ideas, experiments, data and discoveries are made available to our colleagues and the public without having undergone peer review. This inversion of the traditional temporal relationship between peer review and publication is good for science and anyone who cares about or benefits from its accomplishments. But it also has rendered the existing infrastructure we have for carrying out peer review – scientific journals – obsolete.

It is incumbent upon the scientific community to seize this opportunity, reinventing the ways we assess our research outputs and each other to make them fairer, more efficient and more effective. This will require new ideas and a lot of experimentation. In that spirit I describe here a new project – called Appraise – that is both a model and an experimental platform for what peer review can and should look like in a world without journals. Continue reading

Opening up peer review

By Dr. Stuart Taylor, Publishing Director, The Royal Society, London, UK

Peer review has been a key part of the research communication system for centuries. Scientists absolutely depend on a research literature that is as reliable, reproducible and trustworthy as possible in order to inform their future work and to help explain other findings. Subjecting results, discoveries and theories to rigorous scrutiny is an essential part of this error-checking and self-correction process, as is the system of retraction in cases where findings have later been found to be unreliable. Continue reading

Hypercompetition and journal peer review

By Chris Pickett

Journal peer review is a critical part of vetting the integrity of the literature, and the research community should do more to value this exercise. Biomedical research is in a period of hypercompetition, and the pressures of hypercompetition force scientists to focus on metrics that define success in the current environment—funding, publications and jobs. But it also means that other activities that are critical for research but indirectly linked to success in this environment, like peer review, mentoring and teaching, take a back seat.

Incentives must be developed that ensure that quality journal peer review is valued even in a hypercompetitive environment. Some journals do offer a benefit for serving as a reviewer, but the need to publish in the most visible, highest quality journal possible in this hypercompetitive environment often outweighs the benefits provided by a journal for serving as a reviewer. Therefore, the incentives should not come from journals. Continue reading

In Defence of Peer Review

By Tony Hyman and Ron Vale

Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany, and the University of California, San Francisco, United States

Rapid changes in communication technology have led to sea changes in publication. The days when John Maddox (1) joined Nature and found submitted manuscripts sitting in piles forgotten on the floor have long gone, and no longer do we receive publications as bound printed volumes.  Those lucky enough to be part of a rich institution that can pay subscription fees have instant access to most of the world’s literature at their fingertips. However, there is one aspect of publication that has remained essentially unchanged over the past half-century: peer review. Journals send out scientific papers to two or three experts who provide critiques of the work. These critiques are used to alert authors and editors to possible mistakes, flaws in interpretation, or lack of clarity in presentation; meanwhile, editors use these critiques to judge whether a paper is suitable for publication in their journal. Granted, papers are now transferred to referees by web interfaces rather than by the post, but why in these days of the internet, with pervasive online, crowd-sourced commentary, has technology not had more impact on peer review? Is the premise that expert peer review is an essential part of scientific dissemination still a valid one, or does peer review persist for reasons of inertia alone? Here, we argue for the former, while acknowledging the need for fresh thinking in the peer review paradigm. Continue reading

Six essential reads on peer review

In preparation for our meeting on Transparency, Recognition, and Innovation in Peer Review in the Life Sciences on February 7-9 at HHMI Headquarters, we’ve collected some recent (and not-so-recent) literature on journal peer review. A full annotated bibliography can be found at the bottom of this post, and we invite any additions via comments. To make the list more manageable, we’ve highlighted some of the most crucial content here. Continue reading

Should reviewers be expected to review supporting datasets and code?

by John Helliwell, Emeritus Professor of Chemistry, University of Manchester, and DSc Physics, University of York (@HelliwellJohn)


For the meeting entitled “Transparency, Reward, and Innovation in Peer Review in the Life Sciences” to be held on Feb. 7-9, 2018 at the Howard Hughes Medical Institute in Chevy Chase, Maryland, I have been asked by The Wellcome Trust to open the discussion on the question in my title.

In my view, peer reviewing research article submissions to journals is arguably one of the most important roles we scientists play. Through this process we seek to improve the research of our peers, highlighting errors and omissions, and work to ensure that scientifically flawed research does not get published. To perform this work effectively, however – especially in our new data-driven age – it is crucial that peer reviewers are given unfettered access to the data and code underlying the research we are reviewing. Unfortunately, while many journals provide access to these data after an article’s publication,* most journals do not provide access to this material during the refereeing process, making it almost impossible to perform an effective peer review function.

In this blog post I will discuss why peer review of the underpinning data of a research article is important – using examples from my field of crystallography – and outline some steps which funders and publishers could take to implement peer review of data. Continue reading

Should scientists receive credit for peer review?

by Stephen Curry, Professor of Structural Biology, Imperial College (@Stephen_Curry)

As the song goes – and I have in mind the Beatles’ 1963 cover version of “Money (That’s What I Want)” – “the best things in life are free.” But is peer review one of them? The freely given service that many scientists provide as validation and quality control of research papers submitted for publication has its critics. Richard Smith, who served as the editor of the British Medical Journal from 1991 to 2004, considered peer review to be “ineffective, largely a lottery, anti-innovatory, slow, expensive, wasteful of scientific time, inefficient, easily abused, prone to bias, unable to detect fraud and irrelevant.” Although my own experience, and that of many colleagues, is that peer review mostly provides valuable clarification and polishing of submitted manuscripts, Smith is worth listening to because there are growing concerns about the inability of peer review to provide a sufficient test of the integrity of the scientific record. That trend should worry everyone involved in scholarly publication. Continue reading