By Damian Pattinson, Research Square
Introduction
The term ‘peer review’ has come to mean any assessment performed on a manuscript prior to publication: a paper submitted to a journal ‘undergoes peer review’ and is then published. In actual fact, preparing a manuscript for publication requires a huge number of checks, clarifications, tweaks and version changes, from many different parties. But because of the tradition of confidentiality during the editorial process, much of this work has gone unnoticed by the academic community.
The checks a journal performs depend upon its editorial policies, and these policies have helped shape research practice in very positive ways over the years – things like mandatory trial registration, ethical approval for animal research, and deposition of sequence and expression data have only become standard practice because of strict journal policies. But the question of how these policies are policed is not a straightforward one. There has never been a full examination of who should check what in a research manuscript, and this has led to confusion among reviewers and readers about what has been assessed and what has not. In this article I break apart the assessment process and propose a way in which all the relevant pieces can be assessed, whether at a journal or on a preprint server.
How editorial standards improve research
When a paper is submitted to a journal for publication, a lot of work is done by the editorial office to make sure it is suitable for peer review. The list of issues that regularly arise in submitted manuscripts is long, complicated and often ugly. For example, a study could be
- Plagiarized
- Poorly written
- Full of undisclosed competing interests
- Lacking ethical approval
- Lacking permission to perform experiments
- Lacking consent from participants
- Defamatory
- Offensive
- At risk of being repurposed for nefarious means (dual use)
- Containing inappropriately manipulated images
- Missing key methodological information
- Fraudulent
- Using misidentified cell lines
- Containing stolen data
- Unregistered
- Written in a ‘paper factory’ that will sell authorship once the paper has been accepted for publication (yes, this does happen)
Then there are any journal-specific policies, such as availability of underlying data or adherence to community reporting guidelines, which must also be looked at in detail. In an ideal world, a journal editor would check for all of these things systematically ahead of peer review, to save reviewers from spending time on papers with fundamental problems. But in reality, many of these checks are left to the reviewers, even if they are not explicitly asked, or even qualified, to perform them. This is a problem.
For the system to function more effectively, the journal needs to be explicit about what has been assessed by the editorial office, and what it is asking the reviewer to assess. This would involve a far more transparent QC process and a structured peer review form that asks reviewers to comment only on particular elements of the manuscript. Similarly, any future system that aims to perform assessment on preprint servers in the absence of an editorial decision-maker (as has been put forward on these pages) needs to allow for the fact that there are some things peer reviewers are qualified to attend to, and some things that require other expertise.
Without these checks, it is not hard to imagine a situation where a preprint is judged to be legitimate by a number of expert reviewers, yet fails to meet the most basic standards of, for example, ethical oversight. Even if the work itself is not unethical, it creates the impression that unapproved work can be legitimate, diminishing the case for prospective approval of future research and inviting an inevitable rise in unethical practices. Without editorial standards, we quickly find ourselves in a world where animal or human ethics, or cell line provenance, or data availability, count for nothing, and research integrity takes a huge leap backwards.
A method for maintaining editorial standards on preprints
Just as journals can demonstrate the value they add through transparent editorial checks, the same approach could apply to preprints. But it is unreasonable to expect volunteer editors on preprint servers to have the knowledge, or the time, to assess these papers for integrity, ethics and reporting. An alternative system is needed.
Our proposed solution is a modular form of publishing, which allows authors to pay for checks that they feel are relevant for their paper, and receive certifications that appear as badges on the manuscript itself. These checks are performed by PhD-level scientists who understand the basic areas and techniques that the manuscript covers, but also have training in journal ethics and reporting standards to ensure high levels of rigor and integrity. In essence, a band of professional managing editors, available on demand, to assess and certify specific elements of a manuscript.
We at Research Square have started building such a system, focusing on two main areas: integrity and reporting/reproducibility. We already work with a number of journals to certify papers in these areas. But our badges can also be applied in other settings, for example on preprints. Here, an author would pay to receive an evaluation and, if successful, a certification, so that any reader, reviewer, or funder can easily see that the manuscript meets the necessary standards. Additional badges covering other community reporting standards, data availability, and statistical rigor are in development.
In previous work, I and others have made the distinction between the state and standing of a manuscript (Neylon et al. F1000Research 2017). By this, we mean that a manuscript going through any assessment exercise (for example journal peer review) undergoes numerous changes to its basic characteristics (its ‘state’), but also to its perceived value (its ‘standing’). Standing can change without the object itself changing, for example by a respected person praising, or rubbishing, its findings. Research Square badges serve to expose previously unseen changes to state (i.e., work that is usually done out of sight within journal workflows) in order to raise standing. Even if an article passes all our checks without a single revision, it still gains a badge of certification, raising its standing in the eyes of readers and potential reviewers (and, perhaps further down the line, journal editors).
Conclusion
The assessment of scientific research requires many different forms of expertise. In the current world of journals, many of these come from peers within the community, and many come from journal editors. By ‘decoupling’ this assessment process, we can increase transparency, confidence and efficiency without shortchanging the peer review process.
Once decoupled, editorial oversight can be applied in other settings, allowing research posted anywhere to demonstrate high standards. For example, research that does not currently make it into journals, such as replication or negative studies, but which will hopefully start to see the light of day on preprint platforms, would be able to demonstrate the same basic standards that are expected of articles in the formal literature.
A big question remains as to whether authors would be willing to pay for these badges independently. At present the cost of editorial checks is part of the APC or subscription fee, and authors know that they must adhere to the standards in order to be accepted for publication. Whether they would be willing to pay for such checks in isolation, without the reward of journal acceptance, is unclear, and may require encouragement from funding agencies and institutions. But if we are to maintain the high standards that we have come to expect from researchers, it is important that we offer a means for authors to demonstrate them. The editorial badges described here will hopefully go some way to providing that means.