Author Archives: Jessica Polka

Hypercompetition and journal peer review

By Chris Pickett

Journal peer review is a critical part of vetting the integrity of the literature, and the research community should do more to value this exercise. Biomedical research is in a period of hypercompetition, and its pressures force scientists to focus on the metrics that define success in the current environment—funding, publications and jobs. This also means that other activities that are critical for research but only indirectly linked to success in this environment, like peer review, mentoring and teaching, take a back seat.

Incentives must be developed to ensure that quality journal peer review is valued even in a hypercompetitive environment. Some journals do offer a benefit for serving as a reviewer, but the need to publish in the most visible, highest-quality journal possible often outweighs any benefit a journal can provide for reviewing. Therefore, the incentives should not come from journals.

ASAPbio newsletter vol 11 – Peer review meeting, peer review service proposal, welcome new board members

Dear subscribers,

Hope 2018 is off to a great start for you! We have a few exciting announcements:

Save the date: Transparency, Recognition, and Innovation in Peer Review in the Life Sciences

On February 7-8, tune in to watch a webcast of a meeting we’re co-hosting with HHMI and the Wellcome Trust on how we can modernize and improve peer review. We want to engage as much of the community as possible through several routes:

  1. Authors, reviewers, editors, publishers: please take our general survey on peer review! This will help to shape discussion at the meeting.
  2. Read commentary on peer review here. Interested in writing your own? Please email us.
  3. Join the discussion with #bioPeerReview before, during, and after the meeting!

Peer Feedback: a proposal for a journal-agnostic peer review service

Ahead of the meeting, we’ve put forth a proposal for a scientist-driven, journal-independent peer review service called Peer Feedback. You can read the full proposal here, and we welcome your feedback via email, in the comments of the blog post, or on social media (@ASAPbio_ on Twitter). We’re also collecting feedback from authors and reviewers, especially in biology, in our survey on Peer Feedback. If this describes you, please fill out both this survey and the more general one above.

Welcome new board members!

We’re thrilled to welcome two new members to ASAPbio’s board of directors: Heather Joseph, Executive Director of SPARC, and Prachee Avasthi, Assistant Professor at the University of Kansas Medical Center!

Heather is a strong advocate for open access, data, and educational resources, and brings deep experience with advocacy, coalition building, and the publishing industry to our board.

Prachee is a leader among early career researchers (she’s the founder of New PI Slack) and a strong proponent of preprints and their use in innovative ways, for example as the foundation of preprint journal clubs.

We’re looking forward to working with them!

Jessica Polka
Director, ASAPbio

In Defence of Peer Review

By Tony Hyman and Ron Vale

Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany, and the University of California, San Francisco, United States

Rapid changes in communication technology have led to sea changes in publication. The days when John Maddox (1) joined Nature and found submitted manuscripts sitting forgotten in piles on the floor are long gone, and no longer do we receive publications as bound printed volumes. Those lucky enough to be part of a rich institution that can pay subscription fees have instant access to most of the world’s literature at their fingertips. However, one aspect of publication has remained essentially unchanged over the past half-century: peer review. Journals send scientific papers to two or three experts who provide critiques of the work. These critiques alert authors and editors to possible mistakes, flaws in interpretation, or lack of clarity in presentation; meanwhile, editors use them to judge whether a paper is suitable for publication in their journal. Granted, papers are now transferred to referees by web interfaces rather than by post, but why, in these days of the internet, with pervasive online, crowd-sourced commentary, has technology not had more impact on peer review? Is the premise that expert peer review is an essential part of scientific dissemination still valid, or does peer review persist for reasons of inertia alone? Here, we argue for the former, while acknowledging the need for fresh thinking in the peer review paradigm.

Six essential reads on peer review

In preparation for our meeting on Transparency, Recognition, and Innovation in Peer Review in the Life Sciences on February 7-9 at HHMI Headquarters, we’ve collected some recent (and not-so-recent) literature on journal peer review. A full annotated bibliography can be found at the bottom of this post, and we invite any additions via comments. To make the list more manageable, we’ve highlighted some of the most crucial content here.

1. A multi-disciplinary perspective on emergent and future innovations in peer review

Tennant JP, Dugan JM, Graziotin D et al. A multi-disciplinary perspective on emergent and future innovations in peer review [version 2; referees: 2 approved]. F1000Research 2017, 6:1151 (doi: 10.12688/f1000research.12037.2)

This massive review from Jonathan Tennant et al. offers a comprehensive discussion of the origin, evolution, and challenges of modern peer review. Of particular note are Section 1, which offers a history of peer review and an overview of critiques of the system, and Section 3, which discusses how platforms like GitHub, Reddit, Amazon, and Stack Overflow could catalyze innovations in scholarly peer review.

“Figure 2. A brief timeline of the evolution of peer review: The revolution. See text for more details on individual initiatives.”


2. What is open peer review? A systematic review

Ross-Hellauer T. What is open peer review? A systematic review [version 2; referees: 4 approved]. F1000Research 2017, 6:588 (doi: 10.12688/f1000research.11369.2)

The very term “open peer review” means many different things to different people. It can refer to revealing reviewer identities or the content of their reviews, allowing reviewers to discuss the article with one another or the author, or a process in which reviewers don’t need to be invited in order to offer feedback. Indeed, Tony Ross-Hellauer has identified 122 definitions for open peer review.

Fortunately for us, he’s also distilled a schema of seven “traits” of open peer review:

  • Open identities: Authors and reviewers are aware of each other’s identity.
  • Open reports: Review reports are published alongside the relevant article.
  • Open participation: The wider community are able to contribute to the review process.
  • Open interaction: Direct reciprocal discussion between author(s) and reviewers, and/or between reviewers, is allowed and encouraged.
  • Open pre-review manuscripts: Manuscripts are made immediately available (e.g., via pre-print servers like arXiv) in advance of any formal peer review procedures.
  • Open final-version commenting: Review or commenting on final “version of record” publications.
  • Open platforms (“decoupled review”): Review is facilitated by a different organizational entity than the venue of publication.

We’ll use this taxonomy throughout our discussions on this blog and at the meeting.

3. Survey on open peer review: Attitudes and experience amongst editors, authors and reviewers.

Ross-Hellauer T, Deppe A, Schmidt B (2017) Survey on open peer review: Attitudes and experience amongst editors, authors and reviewers. PLOS ONE 12(12): e0189311.

The taxonomy identified above was used in an online survey completed by over 3,000 scholars, 95.5% of whom had authored an academic communication and 87.6% of whom had experience as reviewers. While the respondent pool skewed heavily toward the earth and environmental sciences (41.6%), 14.6% of respondents were biologists and 14.5% were in the health sciences. Perhaps because this was an opt-in survey conducted by an organization devoted to open access, the respondents were somewhat more cynical (though not dramatically so) about the current peer review process than those in a previous study conducted by Mark Ware for the Publishing Research Consortium.

One highlight of the report is Figure 8, reproduced below, which reports attitudes toward the seven traits of open peer review identified in Ross-Hellauer’s F1000 paper. Respondents had largely positive views of open interaction, open reports, and open final-version commenting. Interestingly, all of the traits save open identities were regarded more favorably than open pre-review manuscripts (i.e., preprints)!

Given the growth of preprints in the life sciences and other disciplines over the last few years, these data suggest that peer review might be ripe for even more radical change.

“Figure 8: Will ‘X’ make peer review better, worse, or have no effect?” CC BY

4. Peer reviews are open for registering at Crossref

Lin, Jennifer (2017). Peer reviews are open for registering at Crossref.

During Peer Review Week this fall, Crossref—the non-profit organization that issues DOIs for scholarly journal articles (and preprints)—announced that it will also offer a type of DOI specifically for peer reviews. Registration opened in late October, and the accompanying documentation suggests interesting and helpful use cases. For example, “contributors” to the review can include (in addition to regular “reviewers”) assistant reviewers, stats reviewers, or translators, and all may be anonymous. The metadata contains fields to keep track of whether the review occurred pre- or post-publication, the reviewing round, and the license under which the review is released. And the review can be linked to the object being reviewed (which doesn’t need to be a journal article).
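As a rough illustration of the kinds of fields involved, a review record might carry metadata like the sketch below. The field names and values here are hypothetical stand-ins chosen to mirror the description above, not Crossref’s actual schema (real deposits are made in XML):

```python
# Illustrative sketch only: a plain dict mirroring the metadata fields
# described above. All names and identifiers are hypothetical examples.
review_metadata = {
    "doi": "10.0000/example-review-1",   # hypothetical review DOI
    "stage": "pre-publication",          # pre- or post-publication review
    "revision_round": 1,                 # which round of review this was
    "license": "CC-BY-4.0",              # license the review is released under
    "contributors": [
        # Contributors can hold roles beyond "reviewer", and any may be anonymous.
        {"role": "reviewer", "name": "Jane Doe", "anonymous": False},
        {"role": "stats-reviewer", "name": None, "anonymous": True},
    ],
    # The review links back to the object being reviewed,
    # which need not be a journal article (it could be a preprint).
    "is_review_of": "10.0000/example-preprint-1",
}

def anonymous_roles(md):
    """Return the roles of contributors who chose to remain anonymous."""
    return [c["role"] for c in md["contributors"] if c["anonymous"]]

print(anonymous_roles(review_metadata))
```

The point of the structure is that anonymity, role, round, and license are all recorded per review rather than inferred, so tools can filter or credit reviews accordingly.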

5. Effect on peer review of telling reviewers that their signed reviews might be posted on the web: randomised controlled trial

van Rooyen, Susan, Tony Delamothe, and Stephen J. W. Evans (2010). “Effect on Peer Review of Telling Reviewers That Their Signed Reviews Might Be Posted on the Web: Randomised Controlled Trial.” BMJ 341: c5729.

One barrier to driving change in peer review is that comparisons between different systems are often clouded by confounders such as different author and reviewer pools across journals or across time (in cases where policy has changed).

The BMJ set a high standard for rigor in studies of peer review by conducting a series of randomized controlled trials in the early 2000s. In this one, members of the intervention group had their signed reviews posted online, while members of the control group did not. The authors found that:

“Telling peer reviewers that their signed reviews might be available in the public domain on the BMJ’s website had no important effect on review quality. Although the possibility of posting reviews online was associated with a high refusal rate among potential peer reviewers and an increase in the amount of time taken to write a review, we believe that the ethical arguments in favour of open peer review more than outweigh these disadvantages.”

In 2014, the BMJ announced that all research papers would be published with signed peer review reports, arguing that

“Such open peer review should increase the accountability of reviewers and editors, at least to some extent. Importantly, it will also give due credit and prominence to the vital work of peer reviewers. At present, peer review activities are under-recognised in the academic community. We hope that reviewers will find this increased visibility helpful when demonstrating the extent and impact of their academic work and that they and others will cite and share their reviews as a learning resource.”

They came to this conclusion in spite of their acknowledgement that, as Karim Khan pointed out, it might provide “more scope for power relationships to favour ‘the great and the good.’”

6. Why we don’t sign our peer reviews

Yoder, Jeremy (2014). Why we don’t sign our peer reviews.

As the authors of the BMJ editorial have acknowledged, making peer review (and especially the names of peer reviewers) transparent can complicate the human relationships that make up science, especially for reviewers who are junior or otherwise marginalized or vulnerable.

To get at some of these issues, Jeremy Yoder has compiled short pieces written by members of his community to explain why they don’t sign peer review.

One anonymous postdoc writes,

“I chose to be anonymous because I am relatively junior, although I am not sure at what point I would consider myself sufficiently senior to change my approach. I expect that my comments might be not taken as seriously if my position (and gender) were known. I also expect that my own comments will be slightly less critical if I reveal my name. Science, and peer review, are political processes and to claim otherwise (i.e., that authors and reviewers will act exactly the same without the protection of reviewer anonymity) seems out of step with the reality.”

Personal connections are also important. Tony Gamble adds that it is especially difficult to reject manuscripts written by friends and colleagues, and Will Pearse has the opposite concern:

“My concern is appearing sycophantic when I enjoy a paper, I’m actually not worried about more negative reviews because I’m always polite and constructive.”

Given these concerns, what benefits are gained by removing or preserving anonymity in peer review? And is posting review reports (without reviewer names) similarly complicated by concerns for protecting authors?

We’ll discuss this (and more) on February 7-9. Tune in to the webcast and join the discussion on Twitter with #bioPeerReview.


Further reading

Compiled with Caitlin Schrein, HHMI


Bornmann, Lutz, Rüdiger Mutz, and Hans-Dieter Daniel. “A Reliability-Generalization Study of Journal Peer Reviews: A Multilevel Meta-Analysis of Inter-Rater Reliability and Its Determinants.” PLOS ONE 5, no. 12 (December 14, 2010): e14331.

Meta-analysis suggesting that there is low inter-rater reliability between reviewers (Cohen’s Kappa: 0.17).

Glonti, Ketevan, Daniel Cauchi, Erik Cobo, Isabelle Boutron, David Moher, and Darko Hren. “A Scoping Review Protocol on the Roles and Tasks of Peer Reviewers in the Manuscript Review Process in Biomedical Journals.” BMJ Open 7, no. 10 (October 1, 2017): e017468.

Helmer, Markus, Manuel Schottdorf, Andreas Neef, and Demian Battaglia. “Research: Gender Bias in Scholarly Peer Review.” eLife 6 (March 21, 2017): e21718.

One of two prominent 2017 demonstrations that editors select more male reviewers.

Lerback, Jory, and Brooks Hanson. “Journals Invite Too Few Women to Referee.” Nature News 541, no. 7638 (January 26, 2017): 455.

One of two prominent 2017 demonstrations that editors select more male reviewers.

Ortega, José Luis. “Are Peer-Review Activities Related to Reviewer Bibliometric Performance? A Scientometric Analysis of Publons.” Scientometrics 112, no. 2 (August 1, 2017): 947–62.

Rooyen, Susan van, Tony Delamothe, and Stephen J. W. Evans. “Effect on Peer Review of Telling Reviewers That Their Signed Reviews Might Be Posted on the Web: Randomised Controlled Trial.” BMJ 341 (November 16, 2010): c5729.

Rooyen, Susan van, Fiona Godlee, Stephen Evans, Nick Black, and Richard Smith. “Effect of Open Peer Review on Quality of Reviews and on Reviewers’ Recommendations: A Randomised Trial.” BMJ 318, no. 7175 (January 2, 1999): 23–27.

Ross-Hellauer, Tony. “What Is Open Peer Review? A Systematic Review.” F1000Research 6 (August 31, 2017): 588.

This literature review distills over 100 partially conflicting definitions of “open peer review” into a taxonomy of seven traits.

Ross-Hellauer, Tony, Arvid Deppe, and Birgit Schmidt. “Survey on Open Peer Review: Attitudes and Experience amongst Editors, Authors and Reviewers.” PLOS ONE 12, no. 12 (December 13, 2017): e0189311.

This survey of over 3,000 respondents (almost 30% coming from the life sciences) demonstrates attitudes toward various open peer review traits such as open identities, open reports, etc.

Tomkins, Andrew, Min Zhang, and William D. Heavlin. “Reviewer Bias in Single- versus Double-Blind Peer Review.” Proceedings of the National Academy of Sciences 114, no. 48 (November 28, 2017): 12708–13.

Ware, Mark. 2015. “Peer Review Survey 2015: Key Findings.”

This 2015 survey conducted by a publishers’ consortium offers a more conservative perspective than Ross-Hellauer’s 2017 work.

Wilkinson, Jo. “Tracking Global Trends in Open Peer Review.” Publons, October 27, 2017.

Commentary and reviews

Bastian, Hilda. 2017. “The Fractured Logic of Blinded Peer Review in Journals.” Absolutely Maybe (blog). October 31, 2017.

Bloch, Daniel. n.d. “Expertise in Sciences and the Decision of What Is Publishable: A Noble yet Endangered Task.” The Conversation. Accessed December 30, 2017.

Cohen. 2017. “The next Stage of SocArXiv’s Development: Bringing Greater Transparency and Efficiency to the Peer Review Process.” Impact of Social Sciences (blog). October 16, 2017.

Commons Select Committee. n.d. “Peer Review.” UK Parliament. Accessed December 30, 2017.

Epstein, Diana, Virginia Wiseman, Natasha Salaria, and Sandra Mounier-Jack. 2017. “The Need for Speed: The Peer-Review Process and What Are We Doing about It?” Health Policy and Planning 32 (10):1345–46.

Falavigna, Asdrubal, Michael Blauth, and Stephen L. Kates. 2017. “Critical Review of a Scientific Manuscript: A Practical Guide for Reviewers.” Journal of Neurosurgery, October, 1–10.

Flier, Jeffrey. 2016. “It’s Time to Overhaul the Secretive Peer Review Process.” STAT. December 5, 2016.

Golumbeanu, Silvia. 2017. “In Fermat’s Library, No Margin Is Too Narrow.” Nautilus. October 16, 2017.

Fermat’s Library is a platform and online community for discussing scientific papers.

Gowers, Timothy. n.d. “Peer Review: The End of an Error?” The TLS. Accessed December 30, 2017.

Groves, Trish. 2010. “Is Open Peer Review the Fairest System? Yes.” BMJ 341 (November):c6424.

Hames, Irene. 2014. “The Peer Review Process: Challenges and Progress.” Editage Insights, June 2014. Accessed December 30, 2017.

Hopfgartner, Gérard. 2017. “What Makes a Good Review from an Editor’s Perspective?” Analytical and Bioanalytical Chemistry 409 (29):6721–22.

Khan, Karim. 2010. “Is Open Peer Review the Fairest System? No.” BMJ 341 (November):c6425.

King, Stuart RF. 2017. “Peer Review: Consultative Review Is Worth the Wait.” eLife 6 (September):e32012.

Kuehn, Bridget M. 2017. “Peer Review: Rooting out Bias.” eLife 6 (September):e32014.

Mayden, Kelley D. 2012. “Peer Review: Publication’s Gold Standard.” Journal of the Advanced Practitioner in Oncology 3 (2):117–22.

Panter, Paige. n.d. “Peer Review for Early Career Researchers: Your Basic Questions Answered | Wiley.” Accessed December 30, 2017.

Rodgers, Peter. 2017. “Peer Review: Decisions, Decisions.” eLife 6 (September):e32011.

Ross-Hellauer, Tony. 2017. “Open Peer Review: Bringing Transparency, Accountability, and Inclusivity to the Peer Review Process.” Impact of Social Sciences (blog). September 13, 2017.

Schekman, Randy. 2017. “Scientific Publishing: Room at the Top.” eLife 6 (October):e31697.

Sipido, Karin R., Diane Gal, Aernout Luttun, Stefan Janssens, Maurilio Sampaolesi, and Paul Holvoet. 2017. “Peer Review: (R)evolution Needed.” Cardiovascular Research 113 (13):e54–56.

Slavov, Nikolai. 2015. “Point of View: Making the Most of Peer Review.” eLife 4 (November):e12708.

Staniland, Mark. n.d. “Increasing Transparency in Peer Review: Of Schemes and Memes Blog.” Accessed December 30, 2017.

Tennant, Jon. 2016. “What If You Could Peer Review the arXiv?” ScienceOpen Blog (blog). April 6, 2016.

Tennant, Jon. 2017. “Peer Review Does Not Have a ‘Gold Standard’, but Does It Need One?” Green Tea and Velociraptors (blog). May 30, 2017.

A response to Mayden (2012), this blog post contains a helpful list of existing guidelines to promote best practices in peer review.

Tomkins, Andrew. n.d. “Understanding Bias in Peer Review.” Research Blog (blog). Accessed December 30, 2017.

Wilkinson. 2017. “Writing a Peer Review Is a Structured Process That Can Be Learned and Improved – 12 Steps to Follow.” Impact of Social Sciences (blog). May 17, 2017.

Wingfield, Brenda. n.d. “The Peer Review System Has Flaws. But It’s Still a Barrier to Bad Science.” The Conversation. Accessed December 30, 2017.

Yoder, Jeremy. 2014a. “Why We Don’t Sign Our Peer Reviews.” April 9, 2014.

Yoder, Jeremy. 2014b. “Why We Sign Our Peer Reviews.” April 9, 2014.

Educational material

“Focus on Peer Review – Nature Masterclasses.” Nature Masterclasses. Accessed December 30, 2017.

Glonti, Ketevan, Daniel Cauchi, Erik Cobo, Isabelle Boutron, David Moher, and Darko Hren. “A Scoping Review Protocol on the Roles and Tasks of Peer Reviewers in the Manuscript Review Process in Biomedical Journals.” BMJ Open 7, no. 10 (October 1, 2017): e017468.

Panter, Paige. “Peer Review for Early Career Researchers: Your Basic Questions Answered | Wiley.” Accessed December 30, 2017.

“Peer Review Training: ACS Reviewer Lab.” Accessed December 30, 2017.

“Publons Academy Supervisors.” Publons. Accessed December 30, 2017.


Should reviewers be expected to review supporting datasets and code?

by John Helliwell, Emeritus Professor of Chemistry, University of Manchester, and DSc Physics, University of York (@HelliwellJohn)


For the meeting entitled “Transparency, Recognition, and Innovation in Peer Review in the Life Sciences,” to be held on Feb. 7-9, 2018 at the Howard Hughes Medical Institute in Chevy Chase, Maryland, I have been asked by the Wellcome Trust to open the discussion on the question in my title.

In my view, peer reviewing research article submissions to journals is one of the most important roles we scientists play. Through this process we seek to improve the research of our peers, highlighting errors and omissions and working to ensure that scientifically flawed research does not get published. To perform this work effectively, however – especially in our new data-driven age – it is crucial that peer reviewers are given unfettered access to the data and code underlying the research we are reviewing. Unfortunately, while many journals provide access to these data after an article’s publication,* most do not provide access to this material during the refereeing process, making it almost impossible to perform an effective peer review.

In this blog post I will discuss why peer review of the data underpinning a research article is important – using examples from my field of crystallography – and outline some steps that funders and publishers could take to implement peer review of data.

Should scientists receive credit for peer review?

by Stephen Curry, Professor of Structural Biology, Imperial College (@Stephen_Curry)

As the song goes – and I have in mind the Beatles’ 1963 cover version of “Money (That’s All I Want)” – “the best things in life are free.” But is peer review one of them? The freely given service that many scientists provide as validation and quality control of research papers submitted for publication has its critics. Richard Smith, who served as the editor of the British Medical Journal from 1991 to 2004, considered peer review to be “ineffective, largely a lottery, anti-innovatory, slow, expensive, wasteful of scientific time, inefficient, easily abused, prone to bias, unable to detect fraud and irrelevant.” Although my own experience, and that of many colleagues, is that peer review mostly provides valuable clarification and polishing of submitted manuscripts, Smith is worth listening to because there are growing concerns about the inability of peer review to provide a sufficient test of the integrity of the scientific record. That trend should worry everyone involved in scholarly publication.

New directions for ASAPbio: outcomes of the July 19 workshop

On July 19, preprint service providers, funders, and researchers gathered in Cambridge, MA and via videoconference for a live-streamed ASAPbio workshop about the evolving preprint ecosystem (see video recording and collaborative notes). The goal of the meeting was to assess outstanding needs in light of recent developments, including CZI’s partnership with bioRxiv. At the meeting, representatives from a number of preprint servers shared an update on their planned developments, and much of the agenda was devoted to discussing how communities of stakeholders can work together to promote constructive developments, both technological (manuscript conversion and screening tools and formats) and social (best practices and increased awareness in diverse communities).

Terminating the Central Service RFA

In learning more about CZI/bioRxiv’s plans, ASAPbio—in collaboration with representatives from the Consortium of Funders supporting Preprints in the Life Sciences—has decided to terminate the request for applications for the Central Service, along with the development of bylaws and the election of a governance body for a Central Service. Many of the goals of the RFA (making preprints easier to find, accessible by machines, and capable of scaling to accommodate a significant fraction of the literature in the life sciences) will be accomplished by the CZI partnership and other developments in this rapidly evolving ecosystem. While we are not pursuing the CS/governing body at this time, we will continue to monitor the preprint space and may revisit infrastructure developments if there is a strong need.

We thank all of the respondents who developed outstanding applications, two of which have been shared publicly, a group of ~30 individuals who provided considerable work and advice toward the development of the governance principles and bylaws, and all members of the larger community who have provided feedback and advice to us.

What’s next for ASAPbio? Preprint standards, awareness, and new directions

At our July 19 workshop, a general consensus was reached around the need for standards and best practices for preprints. This need was articulated most strongly by funding agencies who would like to be able to direct their grantees in selecting an appropriate repository for sharing their results. Currently, there are no broadly agreed-upon best practices or mechanisms by which to evaluate preprint servers in areas such as metadata, preservation, access, screening/manuscript removal, and manuscript scope/completeness. The funders have encouraged ASAPbio to create such standards in consultation with the scientific community, funders, and preprint servers. This effort will allow funding agencies to more readily adopt and use preprints, and it will also increase the reliability of preprints as a form of scientific communication. ASAPbio is committed to transparency and community engagement in all of our work, and we look forward to hearing your feedback as this process moves forward over the next several months.

Furthermore, the meeting identified a strong need for increased awareness of preprints among many communities of researchers. ASAPbio will continue to partner with our ambassadors and others to promote discussions about preprints.

ASAPbio stands for Accelerating Science And Publication in biology. We are working toward the ultimate goal of improving the entire process of communicating research. In addition to preprints, other elements of the publishing system require the attention and involvement of the scientific community. We are currently exploring these directions and will share news about upcoming plans in the near future.


ASAPbio newsletter vol 10 – Meeting on 7/19, licensing task force

Dear subscribers,

The preprint ecosystem is growing rapidly. The CZI/bioRxiv partnership will fuel the expansion of the leading preprint server in the life sciences, and many other servers and platforms with varying degrees of disciplinary overlap exist or are planned (arXiv, PeerJ Preprints, OSF Preprints, ChemRxiv, SSRN, SciELO, PsyArXiv, EngArXiv, SocArXiv, Authorea, F1000Research, etc.). Funding agencies are enacting policies supporting preprints, such as those developed by the Wellcome Trust and the Medical Research Council, while agencies like the NIH have gone a step further and developed guidelines for selecting a preprint server.

ASAPbio is now working to identify any gaps/opportunities in the preprint ecosystem, which will help to inform the revision of ASAPbio’s plans before the close of our RFA suspension.

Toward this end, we’re hosting a one-day meeting in Cambridge, MA on Wednesday, July 19th. The attendees—including funders, researchers, and leaders of preprint services—will discuss new developments, opportunities for collaboration, and perspectives on standards and best practices. A tentative agenda and attendee list can be found here. The meeting will be live-streamed via our YouTube channel, and we invite you to participate live by tweeting your questions and comments with #ASAPbio.

Meanwhile, we’re also working on understanding stakeholder attitudes toward preprint licensing, which is becoming an important topic as journals and funders begin to release policies in the area. We’ve established a preprint licensing task force that will study funder, journal, and researcher needs, provide informative resources, and potentially recommend licenses that benefit the public good.

Jessica Polka
Director, ASAPbio

ASAPbio launches preprint licensing task force

Every month, more and more life scientists are choosing to post a preprint. Posting a preprint can give scientists visibility in their field, establish the priority of their work in progress, earn recognition from funding agencies, and elicit feedback to improve their manuscript.

But once the decision to preprint is made, authors posting to some servers are faced with another important choice: which license to pick—that is, what they will allow others to do with their work. The aggregate effect of such choices will expand or restrict the possible future benefits of preprinting.

Permissive licenses, such as CC-BY, remove barriers to the innovative reuse of content. Such reuse could include new display tools incorporating annotation (such as SciLite and SourceData), discovery tools that excerpt passages to create summaries, archives for preservation, or use of figures in educational materials. Restrictive licenses, on the other hand, could create barriers to reuse, such as scenarios that create modified versions of the work or that are commercially motivated. The choice of license has implications not only for potential uses, but also for the journal that ultimately publishes the paper.

The current state of preprint licensing

Some preprint servers ( and PeerJ Preprints) apply a CC-BY license by default, making all of their preprints fully open access. Others (arXiv and bioRxiv) require authors only to grant the preprint server a license to post the article; in addition, they offer a range of Creative Commons licenses or the choice to retain all rights.

When presented with these choices, authors at bioRxiv and arXiv tend not to prefer permissive licenses. However, based on initial conversations, we find that licensing, and the ramifications of different licenses, are very poorly understood by the scientific community.

Recently, some funders and journals have also entered the conversation. The NIH encourages the use of CC-BY licenses for preprints, while some journals have implemented policies that concern preprints posted under certain licenses, but not others.

ASAPbio’s Preprint Licensing Task Force

To stimulate informed conversation on the licensing of preprints, ASAPbio has established a Preprint Licensing Task Force.

The goals of the task force are to:

  • Understand stakeholder (author, funder, publisher, and preprint server) attitudes toward these licenses
  • Create resources to help inform stakeholder decisions
  • Potentially recommend preprint licenses that will maximize scientific progress and social benefit

The Task Force began work in late May of 2017. It is chaired by Dick Wilder, Associate General Counsel at the Bill & Melinda Gates Foundation and non-voting affiliate on the ASAPbio Board of Directors, and contains among its members researchers, lawyers, and representatives of funding agencies and journals.

If you would like to provide your perspective on preprint licensing to the Task Force, please contact Jessica Polka (