David L. Stern
Janelia Research Campus
HHMI
I have been following the discussion about preprints closely over the past few months, and I am won over by the arguments that science papers should be made freely available to everyone as soon as the authors feel that the work is complete. Posting papers to preprint servers is one good solution; I imagine there are others. (I prefer to call such documents open papers, to remove the stigma associated with calling the work “pre” anything.) However, the discussion about the future of open papers has been unbalanced, with too much emphasis on the consequences of open papers for peer review and too little discussion of the fact that scientists are driven to publish in journals by the existing incentive structure. The CV, and specifically the journal names it lists (along with impact factors, journal reputations, etc.), is used extensively to judge scientists in competitions for jobs, promotions, and grant money. This is the main impediment to widespread adoption of open papers. I have heard many arguments that it is too hard to change the structure of these competitions and that we should instead focus on producing great science in open papers and let the culture shift follow. In contrast, I think it is easier to change the incentive structure first; widespread adoption of open papers will follow, like water flowing downhill.
First, I want to comment on a major preoccupation of the open-papers discussion. I am surprised by all the hand-wringing about the review process. If we focus on the real goal, the dissemination of new scientifically generated knowledge, then any formal review process, whether pre- or post-publication, will be deeply flawed. As is widely understood, pre-publication review instantiates many biases that flow from the simple fact that it is not possible to generate an accurate estimate of anything with a sample size of approximately three (three reviewers, in this case). Sometimes, of course, reviewers provide feedback that dramatically improves the core of the work, or they detect fraud or other serious problems with a paper. But these cases are rare and do not justify the long publication delays built into the peer-review process itself. And, of course, we all know of many papers in the “top” journals that contain one or more deep flaws that were not caught by the review process. Moreover, any post-publication review process will suffer from similar biases, unless we were to recruit dozens or hundreds of reviewers per paper. Clearly, the scientific community would balk at producing that many reviews, and, anyway, who would have time to read them all? Instead, I believe, we will do better to rely simply on the scientific process itself. Over time, good science is replicated, elevated, and established as most likely true; bad science may fail to be replicated, its flaws may be noted, and it is usually quietly dismissed as untrue. This process may take considerable time, sometimes years, sometimes decades. But the most egregious papers are usually detected quickly by experts as most likely garbage. This self-correcting aspect of science often does not involve explicit written documentation of a paper’s flaws; the community simply decides that these papers are unhelpful, and the field moves in a different direction.
In sum, we should stop worrying about peer review. Several colleagues have told me that this will lead to people publishing huge numbers of papers containing patently untrue statements or lots of junk science. This may happen, but I think it is extremely unlikely that the authors of such papers will be rewarded for these efforts. Instead, papers will come to represent, as they always should have, your scientific honor. As someone once told me, “In science, your name is your brand.” Just like any brand, if you associate lots of junk with it, then people will associate your name with the junk.
The real question that people seem to be struggling with is “How will we judge the quality of the science if it is not peer reviewed and published in a journal that I ‘respect’?” Of course, the answer is obvious. Read the papers! But here is where we come to the crux of the incentive problem. Currently, scientists are rewarded for publishing in “top” journals, on the assumption that these journals publish only great science. Since this assumption is demonstrably false, and since journal publishing involves many evils that are discussed at length in other posts, a better solution is to cut journals out of the incentive structure altogether.
Fortunately, there are simple steps we can take to establish incentives that encourage publication of open papers, freeing scientists to focus more on science and less on the mechanics of publishing. Consider, first, what happens when we are asked to judge candidates for jobs, promotions, grants, or fellowships. We are always provided with a CV and then one or more of the following: a research plan or vision statement, several of the applicant’s best publications, and several letters of recommendation. If we were to focus our assessment on the core products of science (the papers and, to a certain extent, the research plan) and ignore everything else, we would send a clear and strong message to scientists that the incentive structure has changed and that there is every reason to publish open papers.
There are likely to be many ways to improve the structure of review processes. Over the past week I have come up with a few ideas, and I am sure the community will be able to improve upon them. Here are four concrete things we could do tomorrow to change the incentive structure of science. As shorthand, I will call the heap of documents that one submits for hiring, promotions, grants, fellowships, and so on the “package.”
First, we should eliminate CVs from packages. This is a radical departure from current practice, so please bear with me and remember that the goal is to focus on the applicant’s science. There are many reasons to ban CVs. First, reviewers should not be aware of applicants’ names (and, by extension, their gender or race). Second, we should be ignorant of authors’ affiliations (both in training and at present), to eliminate the unconscious bias associated with place. Third, if we wish to uncouple the quality of the science from journal names, then we should not see journal names anywhere in the application. A long list of Nature articles in one CV versus a short list of papers in trade journals in a second will almost certainly lead most reviewers to favor the first applicant, even if they have not read a single word of any of the articles. That is precisely the bias we want to eliminate. CVs often contain other useful bits of information, such as community service and teaching experience, but these could usefully be incorporated into other parts of a package when they are specifically relevant to the job or grant under consideration.
Second, applicants should submit several papers with their package and these papers should give no indication of the journals they may have been published in or the authors’ names. It would probably be useful to indicate how many authors were involved in the study and whether the applicant is a major contributor to the work.
Third, applicants should write a short summary of each submitted paper in plain language, so that a broader community of scientists can understand the major results and implications of the work.
Finally, we should consider whether letters of recommendation are helpful. In my time in science, letters have rarely provided useful insight. My elders have told me that there was a time when letters were informative. Maybe it really was better in the good old days! Now, however, letters serve mainly to soak up huge amounts of time for both writers and readers while providing little added value. Almost all letters are full of hyperbole, and it is possible to come away from them with almost any opinion. Is the hyperbole genuine? Is this really the best population geneticist who has ever walked the earth? Or is this letter writer, like almost all others, exaggerating a bit in the arms race of letter writing to promote the career of a favored student? Who knows? I find I often end up reading into letters, rather than reading letters. If the rest of the package gets me excited, then I am pleased to read an over-the-top letter, which apparently agrees with my “independent” opinion. If the package leaves me cold, then I end up searching the letters for comments that are somewhat less than completely over-the-top. I am hard-pressed to think of a single letter that changed my mind about a package.
That leaves us with a package that contains three things. First, a research plan or vision statement, in which the applicant describes, in their own words, their vision of the state of the field and how they hope to contribute to advancing knowledge. Second, several science papers of which the applicants themselves are proud. And, third, short plain-language summaries of what each paper shows.
Reviewers would then have the pleasant job of reading science and a statement about the future of the field. In my experience, great science shines right through great papers: the logic is clear, the writing is lucid, and the conclusions follow straightforwardly from the experiments. If we do not believe that we can judge scientists on their science alone, stripped of the peripherals (journal names, CV junk, and unhelpful letters), then we are in a dire state indeed.
Instituting these, or similar, principles would dramatically rearrange the incentive structure of science and realign scientists’ priorities with the actual goals of science—to pursue and communicate advances in our understanding of the natural world. All scientists would then, as a matter of course, write open papers.
View PDF: https://asapbio.org/wp-content/uploads/2016/02/Fix-the-incentive-structure-and-the-preprints-will-follow.pdf
…”we will do better to rely simply on the scientific process itself”…
Some years ago we published an article called “Natural selection of academic papers” (http://www.openscholar.org.uk/wp-content/uploads/2015/08/nsap_perakakis_2010.pdf) in which we argued that the only thing we need to do to “fix” academic publishing is to remove all intermediaries and let the scientific process take its course, selecting the most valid and important research through unrestricted, open, and transparent community interaction. We finally managed to build an infrastructure that facilitates these unmediated community processes and further provides incentives for academic collaboration at all stages of the research cycle. The Self-Journal of Science (SJS: http://sjscience.org) is this free agora where articles can be shared, reviewed, and discussed openly and transparently.
Unfortunately, as you correctly put it, the infrastructure is not enough. I endorse your suggestions and really hope we start discussing them seriously as a community.
Hi David:
Excellent piece. It goes beyond the question of “how should we publish?” and takes on the related but different question of “how should we evaluate scientists?”
On the first point, I completely agree with you in questioning why we must have our work formally reviewed by peers either before or after publication. Careful authors will generally have subjected the manuscript to multiple revisions, incorporating feedback from all authors but also from a wider group of “readers.” Manuscripts in my laboratory spend an average of six months from the first drafting of figures to submission, typically going through 20 versions of everything before I hit “submit.” That is why I find peer review to be such a nuisance. A trio of anonymous people demand that the paper be contorted into something that they would like to read, with questions that they care about. Often they are well aware that the demand list is really a wish list of experiments that cannot realistically be completed even within a few years of the journal’s request for a revision. That is why publishing on preprint servers is such an exhilarating experience. I decide when the work is ready to share, I sign it, and I vouch for it. Read my paper on bioRxiv, or don’t. It’s really up to you. If you find it useful for your work or for how you think about a problem, that’s wonderful. If you have reason to believe that I made a mistake, by all means tell me about it.
On the second point, I agree that committees do a cursory read of applications, generally using the journal names on the CV as a proxy. I am just as guilty as the next person. Recently, though, I have been doing exactly what you say: going right to the research proposal, reading it and the papers, and only then going back to figure out whether the applicant is a he or a she, and what his/her pedigree is. The letters are so monotonously similar that I wonder how many are generated by a script on Google Recommend that lets the user input a name and toggle the level of enthusiasm on a 10-point scale, with most people choosing values between 9.89 and 10.00.
But I think our immediate job at #ASAPbio is to address how we share our work; how we evaluate people will follow quickly thereafter.
Leslie
@DavidStern: “Fix the incentive structure and the preprints will follow.” Hear, hear!