Document 6: Additional Questions for Possible Consideration

Drafted by ASAPbio

How can funders help to validate preprints as a mechanism for communication?

In a Commentary in Science published on May 20, 2016, co-authors representing several funding agencies recommended:

1) Publishing an explicit statement encouraging researchers to make early versions of their manuscripts available through acceptable preprint repositories.

2) Permitting the citation of preprints in acceptable repositories in grant proposals as evidence of productivity, research progress and/or preliminary work.

3) Providing guidance to reviewers on how to assess preprints in grant proposals.

How do funders envision taking these recommendations forward within their own agencies? Can ASAPbio assist in those efforts by working with scientific societies, institutions, journals, and advocacy groups?

Special considerations for human research?

Are there special limitations or concerns regarding preprints and human research that should be taken into consideration for a funder-supported core preprint service?

Gathering data on preprint usage?

Currently, our understanding of the effectiveness and potential pitfalls of how we communicate and evaluate scientific findings is based mostly on opinion rather than data. Might funders wish to gather data concerning preprint servers (or compare work posted to preprint servers with work submitted to journals)? Do preprint servers facilitate the transmission of irreproducible work? Or do preprints reduce the incidence of problematic journal publications? Do scientists submit lower-quality work to preprint servers, or is the content similar to journal submissions? Is the transmission of “pseudo-science” a problem in reality? Do grant committees find preprints useful or burdensome? Are there additional questions that could be informed by data?

Managing quality control?

What kind of quality control would funders like to see (i.e., preventing pseudoscience)? How feasible is it to ensure uniform quality control across multiple servers? Must all screening be done manually by members of the community, or could algorithms be useful as a mechanism for prioritizing human screening (based upon ORCID numbers, grant support, prior publications, etc.)? Do funders want additional quality control provisions (e.g., e-signing by the submitting author of agreements on authorship and ethical standards of data gathering)? Would acknowledging and linking submitted work to grant support help to solidify its credibility? Should the service remove or flag a preprint that subsequent review has shown to contain incorrect or falsified data? How important is QC for preprints? Which features of QC should be implemented now, and which could be approached in the future?