What is public preprint feedback?
Any public reaction to a preprint, ranging from tweets to comments to formal peer review.
Where does it happen?
Many preprint servers offer commenting and annotation features. Sever et al.’s 2019 survey (see image) found that public feedback on bioRxiv preprints is primarily received on Twitter and in the comment section. An analysis of bioRxiv comments by Malicki et al. found that 12% of non-author comments resembled journal reviews.
Preprint discussion platforms organize commenting activity into a variety of workflows and enhance the discoverability of feedback. Learn more about preprint feedback initiatives at ReimagineReview.
How can I find it?
Many preprint servers have commenting functionality, and some link to social media commentary or external peer review sites. For example, bioRxiv has created an evaluation dashboard to organize feedback from multiple sources, including social media.
Twitter is a common venue for preprint discussions. If a server doesn’t display tweets discussing the paper, you can enter its URL as a search term at twitter.com.
Preprint reviews from organized communities can be easily found at Sciety and Early Evidence Base, both of which provide tools to search and filter preprint evaluations. Some reviews are also linked from individual preprint records at EuropePMC.
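If you prefer to look records up programmatically, the sketch below shows one possible way to query the Europe PMC REST search API for a preprint by its DOI. The DOI shown is a placeholder and the use of Python with the requests library is an illustrative assumption, not an official recipe; any reviews linked to a record are easiest to browse on its EuropePMC page itself.

```python
# Minimal sketch: look up a preprint record on Europe PMC by DOI.
# The DOI below is a placeholder; replace it with the preprint you are checking.
import requests

EPMC_SEARCH = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def find_preprint_records(doi):
    """Return Europe PMC records whose DOI field matches the given DOI."""
    params = {
        "query": f'DOI:"{doi}"',  # field search on the DOI
        "format": "json",
        "resultType": "core",
    }
    response = requests.get(EPMC_SEARCH, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("resultList", {}).get("result", [])

if __name__ == "__main__":
    for record in find_preprint_records("10.1101/2020.01.01.000000"):  # placeholder DOI
        print(record.get("source"), record.get("id"), "-", record.get("title"))
```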
Wherever you find preprint reviews, be sure to check for an author response to make sure you are considering their perspectives. In venues where anyone can contribute to a stream of feedback (for example, preprint server commenting sections, Hypothesis, or Twitter), look for a reply from the authors. If the review appears in venues that don’t directly support author responses (such as GitHub), you can search for the URL of the review to identify pages that mention it.
What are the benefits of making preprint feedback public?
Any constructive feedback on a preprint, including feedback sent privately to the authors, is beneficial: it helps authors improve their paper at a relatively early stage of its development, and it can come from anyone, rather than only the handful of reviewers a journal might consult.
Bringing that feedback into the open can have several benefits for a variety of stakeholders.
For authors, public preprint feedback can:
- Increase the rigor of scientific work because criticisms and concerns can be validated by others.
- Enable them to publicly rebut any lingering or recurrent criticisms.
- Reduce rounds of re-review.
For readers, public preprint feedback can:
- Provide context and expert evaluation of the work. For example, rapid and public evaluation of COVID-19 preprints helped non-specialists understand new developments.
For feedback providers and reviewers, public preprint feedback can:
- Include more researchers in the peer review process than those invited by journals, including those from underrepresented geographical areas and career stages.
- Enable peer review to be reused by other journals or evaluators, reducing time wasted on rounds of review.
- Act as a sample of your reviewing work, enabling journals to more confidently extend invitations to join editorial boards and reviewer pools.
For journals and publishers, public preprint feedback can:
- Help editors identify potential reviewers and editorial board members based on the feedback those researchers have provided.
- Inform the decisions of editors at the triage stage.
- Complement formal peer review reports commissioned by the journal.
What are the concerns?
One major concern surrounding public preprint feedback is that it may harm the reputations of authors who receive it. The overwhelming majority of papers are criticized, often heavily, in the process of journal peer review, but this feedback is seldom seen (unless the journal participates in the constructive practice of publishing peer review). Open review may therefore give uncritical readers the impression that a paper with public feedback is less robust than one with none, even if the two are of similar quality. However, as the volume of public review increases through efforts like eLife’s policy of posting reviews on preprints, preprints with public reviews are likely to become normalized.
Another concern is that public review may cause referees to adopt a more collegial approach to their feedback. While this likely has positive consequences, it may also lead some reviewers to “pull punches” and avoid or soften criticism, especially if they are compelled to sign their reviews. At the same time, some of the most widely circulated examples of preprint feedback (for example, the deluge of comments on an infamous, and later withdrawn, preprint) involve readers pointing out serious flaws in the work. This suggests that some readers are willing to call out serious problems, at least when the stakes for public health and the public understanding of science are high.
Open participation in peer review and feedback, like other forms of review, is vulnerable to bias and gaming. As with the peer review rings that have manipulated reviews within journals, reviewers could enlist colleagues to leave favorable comments without properly disclosing competing interests. However, the public availability of feedback makes it relatively easy to identify a pattern of biased reviews, compared with reviews submitted behind closed doors to a variety of journals. Furthermore, scientists want and need critical but fair feedback: merely rubber-stamping papers will harm the reputation of the reviewer or review service.
Even without malicious intent, readers are more likely to see, read, and ultimately highlight and comment on preprints within their network. Institutional, geographic, or other disparities in preprint review can amplify Matthew effects.
These issues are discussed further in the other FAQs listed below.
How can preprint review contribute to equity?
Preprints have great potential to democratize both access to and the production of knowledge. However, due to availability bias, researchers tend to be most aware of work occurring within their own networks; highlighting and sharing only these preprints can contribute to Matthew effects and limit readers’ exposure to a sliver of the available science. Looking beyond the obvious candidates when highlighting or amplifying preprints can help. You can:
- Search regional servers. Because some servers may overrepresent certain countries (Abdill et al., 2020), check regional preprint servers such as AfricArxiv and RINarxiv to find papers that may be less visible to your colleagues outside those regions, or include the names of specific countries in your search to find work originating from those settings.
- Publish and search preprints in various languages. An increasing number of preprint repositories accept submissions in more than one language. Search strategically using keywords in English and their translations in your own language, and check whether a preprint already has one or more translations of the abstract available. For more on the importance of multilingualism, see the Helsinki Initiative.
- Find preprints that have yet to be blogged or tweeted about. For example, when looking through bioRxiv for her #365preprints project, Prachee Avasthi recommends selecting preprints that have yet to be tweeted. For the same reason, avoid using Twitter as your only source for finding preprints.
Professional review coordinators and preprint feedback services can also play a role by adopting policies and practices that encourage equity and inclusion.
Thanks to Dasapta Erwin Irawan, Jo Havemann, and Stefano Vianello for their input.
What’s it like to receive public preprint feedback?
Click on each link below to read the full thread on Twitter.
- Jacob Scott summarizes manuscript changes prompted by feedback from Twitter.
- A colleague who provided feedback to Dan Quintana became his coauthor on a revised version of the preprint.
- Amanda Haage incorporated public feedback into her revised preprint about the faculty search process.
- Michael Marty reacted to a journal-independent preprint review process coordinated by James Fraser.