Transparency, Recognition and Innovation in Peer Review in the Life Sciences

HHMI Headquarters
4000 Jones Bridge Road, Chevy Chase, MD 20815-6789

Items marked with ▶ will be webcast. Join the conversation with #bioPeerReview

Live collaborative notes (Google Doc)

Wednesday, February 7, 2018

5:00 pm                     Reception, Great Hall

6:00 pm                     Dinner, Dining Room

7:00 pm                     ▶ Opening Remarks, Auditorium
.                                        Welcome to HHMI (Erin O’Shea)
.                                        Meeting Objectives (Ron Vale) – slides

7:20 pm                     ▶ Keynotes
.                                        Erin O’Shea (HHMI) – slides
.                                        Jeremy Berg (Science) – slides
.                                        Mike Lauer (NIH) – slides

8:20 pm                     ▶ Joint Q&A

8:45 pm – 11 pm       Social at the HHMI Pub, Pilot Lounge
.                                     Final shuttle to hotel departs at 11:15 pm

Thursday, February 8, 2018

7:00 am                     Breakfast, Dining Room

8:30 am                    ▶ Transparency and recognition of peer review
.                                         Moderated by Boyana Konforti

Should peer review reports (with or without reviewer identities) become part of the public scholarly record that is linked to a scientific work? We will discuss whether such transparency is desirable from the viewpoint of scientists (both as authors and reviewers), funding agencies, and journals. If transparent peer review is desirable, how can it be implemented and what might be barriers to adoption?

8:30 am                     ▶ Short plenary talks (5-10 min each), Auditorium

  • Tony Ross-Hellauer (KnowCenter) – slides
    Open Peer Review – Researcher Attitudes and Next Steps
  • Rebecca Lawrence (F1000) – slides
    F1000: Our experiences with preprints followed by formal post-publication peer review
  • Theo Bloom (BMJ) – slides
    Open peer review at BMJ: What we know and what we don’t
  • Joyce Backus (NIH NLM) – slides
    MEDLINE and PMC – Role of journal peer review in journal evaluation
  • Jennifer Lin (Crossref) – slides
    Peer Review Metadata: Provisioning it to systems across the research enterprise
  • Andrew Preston (Publons) – slides
    Publons: Recognizing review and the challenges we’ve faced along the way
  • Kaf Dzirasa (Duke)
    Researcher perspective
  • Natalie Ahn (University of Colorado, Boulder)
    Researcher perspective

9:30 am                     Breakout sessions
.                                   Coffee and refreshments will be available throughout

Each topic will have two groups of ~15 preassigned participants; a facilitator and note-taker have been appointed for each group. To encourage free discussion during the breakout sessions, please adhere to the Chatham House Rule (ie, information may be shared but not attributed to the speaker) for this portion of the meeting.

Peer review is a time-intensive process that forms the cornerstone of readers’ trust in published papers, yet it remains hidden from public view at the overwhelming majority of journals. Making peer review reports open (ie, publishing the content of peer reviews, with or without the names of reviewers) could encourage reviewers to be more careful and constructive in their reviews; help readers to contextualize the paper and make use of the valuable ideas, criticisms, and scholarly contributions offered by peer reviewers; facilitate the recognition of peer review as a valued scholarly contribution in grant applications and in hiring and promotion; and permit better systemic analysis of our peer review system.

However, opening peer review reports may also have undesirable effects. It has been suggested that doing so could reduce the candor with which researchers critique papers, especially those authored by established scientists. It would also increase the total amount of information available, which some find concerning given that there is already “too much to read.” In addition, it could affect the willingness of some researchers to perform peer review at all.

What are the benefits and risks of making peer review transparent? Will it improve the quality of peer review? Should referees receive credit? We would like to examine these issues from the perspective of multiple stakeholders, and we ask the breakout sessions to report back on the questions described below.

Breakout Topic 1. Scientist perspective (Notes from groups A and B)

  1. What are the benefits and risks of making peer review reports open?
  2. Do you think that open reports would improve the quality of peer review? Would open reports contain information of value to the community?
  3. For the risks that you have identified, how could they be mitigated?
  4. Should scientists receive credit or recognition for peer review activities? Should the names of all scientists (including students and postdocs) who contribute to peer review be disclosed to the journal as standard practice?

Breakout Topic 2. Technology and journal perspective (Notes from groups A and B)

  1. What are the benefits and risks of making peer review reports open?
  2. Will reviewers become harder to find in an open system, and how could this be mitigated? Are there other reasons why it would make journal operation more difficult?
  3. How can reviewers be credited for their peer review? What credit systems could be developed if their identities are not publicly revealed?
  4. How can peer reviews be made discoverable? Should all peer review reports receive a DOI?

Breakout Topic 3. Funder and university perspective (Notes from groups A and B)

  1. What are the benefits and risks of making peer review reports open?
  2. Would transparent peer review affect the rigor and reproducibility of scientific work?
  3. How could one evaluate the effects of various forms of transparency (open reports, open identities, open participation, etc) in peer review?
  4. Should scientists be recognized for good peer review – in funding and promotion/tenure applications – and if so by what mechanism?
  5. By what means can the quality as well as the quantity of peer review be measured?

10:45 am                     Break, Great Hall

11:00 am                     ▶ Reports from the breakout groups and discussion, Auditorium
.                                           Moderated by Bodo Stern

12:30 pm                     Private online voting

To measure the degree of consensus on key issues discussed in the morning session, such as open reports, reviewer identity, and credit for peer review, we will invite all participants (in person and remote) to indicate their opinions through an online form. The results will be discussed in aggregate after lunch.

12:45 pm                     Lunch, Dining Room

1:45 pm                     Open discussion on the morning session
.                                         Moderated by Robert Kiley
.                                         Results of private voting and open discussion of next steps

2:15 pm                      ▶ Innovation in peer review
.                                         Moderated by Jessica Polka

What are new ideas for peer review models and how can they be piloted and evaluated? What other evaluation metrics might be considered?

2:15 pm                     ▶ Short plenary talks (5-10 min each), Auditorium

  • Prachee Avasthi (University of Kansas Medical Center) – slides
    Providing peer review training and preprint feedback through preprint journal clubs
  • Andrew McCallum (UMass Amherst) – slides
    OpenReview.net: Five years of open peer review experience in computer science
  • Mike Eisen (UC Berkeley/HHMI) – slides
    Appraise (A Post-Publication Review and Assessment In Science Experiment)
  • Ron Vale (UCSF/HHMI/ASAPbio) – slides
    Peer Feedback
  • Bodo Stern (HHMI) – slides
    Article-specific tags

3:15 pm                     Break, Great Hall

3:30 pm                     Breakout sessions
.                                 Coffee and refreshments will be available throughout

Each topic will have two groups of ~15 preassigned participants; a facilitator and note-taker have been appointed for each group. To encourage free discussion during the breakout sessions, please adhere to the Chatham House Rule (ie, information may be shared but not attributed to the speaker) for this portion of the meeting.

The internet has introduced many new ways of communicating opinions, ratings, and reviews, which could hypothetically be brought to bear on peer review. Our breakout sessions will focus on ideas for broadening the scope of peer review, new ways of evaluating the merit of scientific articles, and issues surrounding the future of expert-based (traditional) peer review. This session is intended to facilitate brainstorming.

Breakout Topic 1: Open participation models (Notes from groups A and B)
Many papers are interesting to far more people than the handful of peer reviewers who evaluate them. Indeed, as evidenced by the prevalence of retracted papers, those reviewers sometimes miss important problems or insights that are later caught by others. Opening up the peer review process to include any interested reader could help to diversify and strengthen evaluation. However, utilization of scientific commenting platforms (e.g., PubMed Commons), especially those prohibiting pseudonyms, is typically low. Furthermore, comment sections around the web sometimes degenerate into unproductive and uncivil discourse.

  1. What are the strengths and weaknesses of open participation systems such as commenting platforms?
  2. How can constructive commentary be incentivized? What behaviors (positive and negative) does anonymity enable?
  3. How can these systems be effectively moderated?
  4. How can open commentary be integrated into a formal peer review process in a productive way? Should preprint comments be taken into account in editorial decisions?

Breakout Topic 2: Improving traditional peer review and reducing reviewer burden (Notes from groups A and B)
The scientific community produces peer reviews in a process orchestrated by journal editors. Our current system of peer review merges technical evaluation of a paper with the process of curating interesting work into appropriate journals. Casting the peer reviewer in a gatekeeping role can affect the constructiveness of feedback given to authors. Furthermore, in the process of serially submitting a paper to different journals, reviewer effort can be duplicated unnecessarily. Several platforms (Axios Review, Peerage of Science, Peer Community In…) are attempting to address this problem, and some consortia of journals (NPRC, eLife/PLOS/EMBO/BMC) share peer review reports and reviewer identities. Overall, these practices have not become widespread. What efforts could increase the efficiency and quality of peer review?

Should all papers be sent for expert peer review, or only a subset judged to be of greater interest? The latter would reduce reviewer burden but could exacerbate inequalities. Please discuss the pros and cons and report your conclusion.

  1. What benefits can be gained from sharing peer reviews between journals? What are the potential downsides? Why is this process of sharing peer review reports not more common, and should it be encouraged?
  2. Should more attention be paid to the scientific data in peer review or is the present system adequate?
  3. System-wide, have we struck the right balance of evaluating data and data presentation versus impact and journal suitability? Do we need better training methods in peer review?
  4. The internet permits conversations among reviewers, between reviewers and editors, and potentially (although largely untested) between reviewers and authors. eLife, EMBO J and others are experimenting in this space (consultative peer review). How effective are these practices, should they become more widespread, and how could they be evaluated?

Breakout Topic 3: New ways of evaluating papers (Notes from groups A and B)
Journal publication confers a readily understood “seal of approval” on a paper, with the brand names of various journals conveying information about which groups of researchers are likely to find it interesting and its perceived importance in the field. DORA has highlighted the need to assess papers by their content rather than by journal-based metrics, an approach that is practical especially for small groups of people assessing relatively small volumes of papers. After all, as scientists read the published literature in the course of their work and replicate or build upon it experimentally, they form their own opinions of each paper. These opinions can change over time and differ drastically among individuals or fields. While these attitudes are shared informally within groups of researchers, they are not recorded or shared in readily accessible ways. Please consider the following topics.

  1. Is the journal hierarchy sufficient for evaluating the worth of a scientific contribution, or is it at least the best option available for now?
  2. What new ideas for evaluating scientific work, whether heard at this meeting or read about elsewhere, merit serious consideration?
  3. Whether at the time of publication or after, what systems could contribute to our understanding of the merit of a scientific work? Should these rely on algorithms? Crowdsourcing? Experts?
  4. Should journals curate or rate papers in other ways, beyond the binary decision to publish?

4:45 pm                     Break, Great Hall

5:00 pm                     ▶ Reports from the breakout groups and discussion, Auditorium
.                                         Moderated by Carol Greider

6:15 pm                     ▶ Open discussion on the afternoon session
.                                         Moderated by Ron Vale

6:55 pm                     ▶ DORA announcement
.                                         Stephen Curry

7:00 pm                     Dinner, Dining Room

8:30 pm – 11 pm       Social at the HHMI Pub, Pilot Lounge
.                                   Final shuttle to hotel departs at 11:15 pm

Friday, February 9, 2018

Events on this day will not be webcast.

7:00 am                     Breakfast, Dining Room

8:30 am                    Participant-driven discussions on implementation and next steps

Participants will propose and join breakout sessions to discuss potential implementation projects of their own design (Google Doc). Groups will report back for additional feedback and discussion.

8:30 am                     Project introductions, Auditorium

9:00 am                     Breakout sessions
.                                   Coffee and refreshments will be available throughout

10:45 am                   Reports from the breakout groups and discussion, Auditorium

11:30 am                    Lunch and departures, Dining Room