The ASAPbio Community is a global and diverse group of researchers and other stakeholders in science communication. While they bring varied expertise and opinions, they all share an interest in and support for the use of preprints. Our Community members had expressed interest in hearing a broader range of perspectives about preprints, beyond the pro-preprint views shared within our Community, so for our March Community Call we did exactly that: we invited two speakers who agreed to join us and share their reservations about preprints.
Our first speaker was Seth Leopold, Professor at the University of Washington School of Medicine and Editor-in-Chief of the journal Clinical Orthopaedics and Related Research. Dr Leopold argued that, in his view, the benefits of preprints tend to be overemphasised while the potential risks are downplayed. Speed of dissemination is often highlighted as a benefit of preprints, but he indicated that in the context of patient care it is more important to get it right than to get it out quickly, and that there are considerable risks to patient trust if disseminated work later turns out to be incorrect. Dr Leopold noted that some appear to be using preprints as a means toward credit while circumventing the traditional gatekeeping mechanisms provided by journals. In clinical research, one important role of journals is to work with authors to tone down the conclusions of the research and outline its limitations; this does not happen for preprints. Another important goal is factual accuracy and minimizing errors in presentation: if a preprint and a published (peer-reviewed, edited) version of record are both in circulation, there is nothing to keep future researchers from finding the preprint and missing the reviewed and edited version of record. Although some preprint servers link to the definitive version of record, many do not. Additionally, Dr Leopold felt that preprints are not adequately marked as not peer reviewed, and journalists and the public may have difficulty distinguishing a preprint from a published, peer-reviewed article. He gave examples where this has occurred in ways that resulted in misinformation or even disinformation being disseminated (note: ASAPbio has developed resources for preprint servers, researchers, institutions, and journalists on how to most accurately display and describe work in preprints, available here). Overall, Dr Leopold felt that insufficient consideration is given to the trade-offs between the benefits of preprints for the individual author and the risks to stakeholder groups he considers more important (specifically physicians and the patients whom they treat). He encouraged the audience to consider the consequences of preprints for those other stakeholders, and ended with the point that once the public’s trust is lost, it will be difficult for scientists and clinician-scientists to regain it.
Our second speaker was Howard Browman, Principal Research Scientist at the Institute of Marine Research in Bergen and Editor-in-Chief of the ICES Journal of Marine Science. Dr Browman started by noting that, judging by the growth of preprints over recent years, it could be said that the preprint ‘train has left the station’; however, from his point of view, the train has left without a security check, and without all of its passengers. To argue the first point, Dr Browman outlined what he perceived as inconsistencies about preprints. Are preprints considered publications? They are not for the purposes of allowing consideration at a journal, but they are for the purposes of citation. Can we actually consider them pre-prints? Some of the papers posted on preprint servers are never published (i.e. there is no print for those), and many are posted after submission to a journal (so how are they pre-prints?). Do preprints accelerate science? For preprints later published in journals, the mean time between posting and publication is 4-6 months (Abdill & Blekhman); in many fields, it is unclear that saving 4-6 months affects the pace of discovery. Do preprints allow community feedback? Many preprints are posted after submission to a journal (Kent Anderson), which contradicts the stated benefit of allowing authors to receive comments and improve their work before submitting it to a journal. Regarding the preprint ‘train passengers,’ Dr Browman noted that the proportion of preprints to journal publications across all fields is currently 8-9% (source: Dimensions database), so there are still many who are not using preprints. In the physical sciences, where arXiv has been in operation for decades and use is perceived as common, the proportion of preprints to journal articles stands at 18% (source: Dimensions database, field of research = physical sciences); this raises the question of whether preprints in biology will ever reach a point where they are a common way of disseminating research. Dr Browman also questioned whether preprints will really redress the concerns around credit and research assessment often raised by researchers, given that those issues are broad and structural in nature.
After the two presentations, ASAPbio Fellow Yamini Ravichandran led a lively conversation with the speakers, introducing questions and comments from the audience. A few items we explored further included:
- There were a few comments about credit and how preprints provide an opportunity for early-career researchers to gain recognition for their work. The speakers agreed that advancing the careers of scientists, especially early-career researchers, is important, but they were concerned that preprints may prioritize credit for the individual over the potential risks to patients and to trust in science.
- Regarding the risks of disseminating content that has not been peer reviewed, is there not also a risk in implying that a piece of research is right just because it has been peer reviewed? The speakers noted that no system is perfect and peer review is not infallible. Still, the journal peer review process provides a mechanism to minimize error, nuance claims, and ensure that limitations are clearly outlined by the time a paper is disseminated. Both presenters argued that it is difficult to make a convincing case that no peer review (the vast majority of preprints receive no comments or feedback) would be better than imperfect peer review.
- If the ‘train has left the station,’ are there any modifications to the preprint system that would help mitigate the concerns? The speakers made several suggestions. First, preprint servers could do a better job of marking preprints as not peer reviewed; research should not be considered evidence until it has been peer reviewed and published. Second, Dr Leopold asked whether preprints should be removed if, after a reasonable period of time, they have not been published; mitigating positive-outcome bias can be achieved in other ways (such as prospective research registration).
There were more questions and comments than time available, but we had a rich conversation, and we thank the speakers again for sharing their perspectives with us. Dr Leopold has also kindly provided answers below to a few additional questions raised at the call.
Understanding the concerns of different stakeholders is an important element of our work: it helps us provide relevant resources and information to all stakeholders, and to help the preprint train continue on a safe journey.
Additional questions raised during the Community Call
Do you think the early release of medical research on COVID, for example, has helped to shape government strategies and public health responses in real time during the pandemic?
SSL: There is no question that federal funders are interested in rapid dissemination; as you know, there is some support among grant funding agencies in the US and elsewhere for what preprints have to offer. Having said that, we have a major signal-to-noise problem in biomedical research (it’s difficult to find the “good stuff” amidst so much low-quality and, in the case of preprints, unvetted work). That problem is only being magnified by preprint servers, and it became especially bad during COVID. There has been a tremendous wave of preprints posted on COVID; only a tiny fraction have been informative. I saw a comment from a JAMA editor who said that he was receiving many papers every day asking uninteresting questions; his example was (I think he said) dozens or hundreds of papers asking the question “What happens to a patient’s hemoglobin level following a blood transfusion during the COVID pandemic?” (The answer, of course, is the same thing that happens when COVID isn’t involved: it goes up.) At our journal, the silly analogue was “What happens to surgical volume when the operating rooms all close because of COVID?” (Again, the answer is the same as at any other time the ORs close: surgical volume drops precipitously.) All of those papers can be posted to preprint servers, and many were. Journals like JAMA (and our journal) serve an important curatorial function and spare readers all of this noise so that they can focus on the signal. While it may advance a researcher’s career to post papers like those in these examples, doing so doesn’t help doctors or their patients. This screening and curation is an important function that journals provide.
Instead of rejecting preprints outright in medical/clinical journals, do you think we can use their openness and early dissemination to aid peer review, in general and in the context of medical research? As of now, some journals are mandating preprinting or integrating it as part of their editorial process; what are your thoughts on this?
SSL: The use of the term “openness” here is troubling to me. It implies that before preprints (or without them) we did not (or would not) have openness, which I don’t believe is true. As I mentioned in my talk, I don’t think there’s anything particularly transparent about a clinician-scientist writing whatever (s)he wishes to, without having to respond to a review process that might modify overstated elements of the message, insist on caveats, identify shortcomings, correct errors, and require disclosure of industry involvement in the research. I think that a better approach to mitigate publication bias—one that was already available before preprinting, and one the better journals already insist on—would be using trial registration (eg, www.clinicaltrials.gov) before starting one’s research. One should register the research before beginning, and then submit the work—with data and interpretation—for peer review scrutiny when it’s ready to go. All the benefits of transparency, mitigation of positive-outcome bias, searchability for future meta-analysts—without the major harms that I associate with preprint servers. For those concerned about pace: Many of the better journals have upped their games in terms of throughput times, and all the good ones have fast-track processes for genuinely important work. I gave some examples of this during COVID in my talk. It’s better that we get things right than do them fast.
Curious about your thoughts on the viability of journals leveraging their review communities to sustainably curate reviews after preprint publication (i.e. via a publish-review-curate model)?
SSL: I’m sorry that I don’t understand this question. I think review should be done before publication in all situations, for the reasons I mentioned. One thing that I think journals can do a better job of, though, is post-publication dialogue. All of my favorite non-medical information sources (newspapers, magazines, blogs) have great “comments” sections that are lively, thoughtful, and in some cases nearly real-time. Journals are still fairly hung up on “letters to the editor,” which usually come out more quickly than articles do but are hardly the same as a “comments” section. As an editor, I see why journals struggle with this (some level of moderation to keep trolling manageable, avoid false or libelous claims, etc., is important and takes resources), but I think we need to improve the way we do this to stay relevant as people grow accustomed to a faster pace of dialogue. This may not be exactly what you are asking me, and if not, I apologize. Thank you again for inviting me to participate here.
Thanks to Drs. Leopold and Browman for their fantastic contributions to this event, and for raising these important concerns. We offer a few thoughts in response to some of the points raised.
Regarding the balance between speed and accuracy, we agree that peer review certainly helps to filter research and improve the overall quality of the literature. Preprints can help to make peer review more robust by exposing manuscripts to a broad audience at a time when the authors can readily address problems. Clearly labeling preprints as such, together with encouraging and surfacing expert commentary on those preprints, could help to achieve this aim while allowing other stakeholders to interpret preprints with appropriate skepticism.
In the life sciences, it is indeed common practice to preprint at or around the time of journal submission, a workflow that is reinforced by integrations between preprint servers and journal submission systems. While this means that feedback on preprints may be received in parallel with journal peer review, it does not prevent authors from using that feedback to improve a revised version of the paper for publication. We feel that any time saved in communicating about research is beneficial, and the 4-6 months saved per paper will compound with each publication cycle into much greater savings over time: a project that builds on three successive results, each shared 4-6 months ahead of journal publication, could reach its conclusions roughly a year to a year and a half sooner.
Regarding the issue of signal-to-noise, we agree with Clay Shirky’s assessment that the real problem is not information overload but filter failure, which can be addressed with robust search and discovery algorithms.
We resonate strongly with Dr Leopold’s message that the needs of other stakeholders must be weighed when considering publishing practices. Preprints, like any other innovation in science communication, should be developed in a way that allows open and inclusive participation while bearing in mind the benefits and challenges for all stakeholders. While we agree with the need to evaluate and mitigate any risks, we also note that the availability of preprints has supported the prompt dissemination of results toward the management of the COVID-19 pandemic, thus also bringing potential benefits to patients and society. Further research into how preprints support scientific discovery, and into any associated challenges, will be valuable in informing assessments of their impact on each stakeholder group.
A more robust and rapid research communication system will ultimately benefit society as a whole, and we must work, through transparent labeling and through engagement about the nature of scientific evidence, preprints, and peer review, to mitigate any challenges that emerge during this transition.
Jessica & Iratxe