Amid the COVID-19 pandemic, preprints are being shared, reported on, and used to shape government policy at unprecedented rates, and journalists now regularly cite preprints in their pandemic coverage. The rise in reporting of research posted as preprints has put preprints squarely in the public eye as never before, presenting a unique opportunity to educate researchers and the public about their value. It has also brought into focus the question of how research is scrutinised and validated. Traditional journal peer review has its shortcomings, and the number of ways research can be evaluated is expanding. This can be a problem for journalists and non-specialist readers, who sometimes don’t fully understand the differences between preprints, peer-reviewed articles, and different forms of peer review. Media coverage can result in the sharing of information that may later not stand up to scientific scrutiny, leading to misunderstanding, misinformation, and the risk of damaging the public perception of preprints and the scientific process.
ASAPbio, with support from the Open Society Foundations, aims to consolidate and expand on existing efforts to set best practice standards for reporting research posted as preprints via the launch of our Preprints in the Public Eye project. Read more in the project announcement. To get involved, email Project Coordinator Jigisha Patel at email@example.com.
News and updates
We called for involvement from preprint servers, institutions, librarians, journal editors, journalists and more. We’re now seeking feedback from all communities on the document, Guiding principles and resources to aid responsible media reporting of research posted as preprints, developed by three stakeholder working groups (details below). This document aims to facilitate and encourage the transparent and responsible reporting of research in terms of the level of evaluation it has undergone. Please share your opinion via the form below.
Document preview and feedback form
The document includes recommendations for preprint servers on best practice for providing information, via labelling, about the level of evaluation research posted as preprints has undergone. A user experience project was carried out to help inform this guidance. The outcome of the project aligned with the recommendations in the document.
Preprint labelling user experience project
The aim of the project was to determine whether readers notice the labels that currently exist on preprint servers, whether they can distinguish between a preprint and a peer-reviewed article, and what would make the labels more noticeable and easier to understand.
- Twelve users (5 laypeople, 3 journalists, 3 researchers and 1 clinician) were asked to look at research related to COVID-19 posted on preprint servers (Research Square, medRxiv, bioRxiv and SSRN) and comment on what they noticed.
- Three users did not notice the labels stating that the preprints had not been peer reviewed.
- Five users wanted to get a better understanding of the preprint process and selection. For example, one said, “I don’t know what they are doing to validate their articles.”
- Suggestions to make the labelling more noticeable included: making the labelling bigger, moving it to below the title, and creating a pop-up message for the reader to acknowledge.
- Users were able to distinguish between a preprint and the published version of the same article because of the ‘professional’ look of the published article.
This project highlighted the need for noticeable labelling and clarity about the screening criteria used to determine what is posted on a preprint server.
The document also presents guiding principles for researchers and institutions on reporting research in the media, and tips and resources for journalists and science writers on reporting research.
The following individuals contributed to the document, but involvement does not necessarily imply institutional endorsement.
Dan Valen, Figshare
Quincey Justman, Cell Systems, Cell Press
Michael Markie, F1000 Research
Theo Bloom, medRxiv
Michele Avissar-Whiting, Research Square
Alex Mendonça, SciELO
James Brian Byrd, University of Michigan
Shirley Decker-Lucke, SSRN
Sowmya Swaminathan, Nature Research & Springer Nature
Tom Ulrich, Broad Institute of MIT and Harvard
Elisa Nelissen, KU Leuven
Roberto Buccione, IRCCS Ospedale San Raffaele & Università Vita-Salute San Raffaele
Emily Packer, eLife
James Fraser, University of California San Francisco
Support for this project was provided by the Open Society Foundations.