Completeness of reporting before and after using a GoodReports reporting checklist

Caroline Struthers, James Harwood, Jennifer Anne de Beyer, Paula Dhiman, Patricia Logullo, Michael Schlüssel

We reviewed whether authors who used and submitted a reporting guideline checklist from GoodReports.org had changed their manuscripts and improved the completeness of their reporting as a result.

We started with the subset of manuscripts from the study on submission rates that had:

been checked by Penelope.ai before submission to BMJ Open,

not withdrawn their submission from BMJ Open, and

included a reporting guideline checklist from GoodReports.org.

We conducted a descriptive before-and-after study on the included manuscripts. Completeness of reporting was described in the version that was uploaded to Penelope.ai for an automatic pre-submission check (the “before” version) and in the version subsequently submitted to BMJ Open (the “after” version).

Assessors checked whether the “after” version submitted to BMJ Open contained the same information as the “before” version or whether the author had added information.

We excluded manuscripts submitted with checklists obtained elsewhere, such as the EQUATOR Network website or the journal website, to reduce the chance that the authors of manuscripts in our “before” group had already used a checklist before visiting GoodReports.org.

JH redacted the title and methods sections of the “before” and “after” versions so that no personal information was shared with assessors. The “before” versions were all in .docx format, so text could be copied and pasted into a fresh Microsoft Word file. The “after” versions were PDFs, because BMJ Open automatically converts submissions into PDF and adds watermarks, line numbers, and footers. JH split the PDF files into smaller files containing only the title and methods sections for data extraction. Because the file formats differed, assessors were not blinded to whether a manuscript was the “before” or “after” version.
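The protocol does not state which tool was used to split the PDFs. As a rough sketch, the step could be done with the open-source pypdf library; the file names and page ranges below are hypothetical.

```python
# A minimal sketch of the PDF-splitting step, assuming the pypdf library
# and that the page range covering the title and methods sections has
# already been identified manually for each manuscript.
from pypdf import PdfReader, PdfWriter

def extract_pages(src_path: str, dst_path: str, first: int, last: int) -> None:
    """Copy pages first..last (0-indexed, inclusive) into a new PDF."""
    reader = PdfReader(src_path)
    writer = PdfWriter()
    for i in range(first, last + 1):
        writer.add_page(reader.pages[i])
    with open(dst_path, "wb") as f:
        writer.write(f)

# Hypothetical example: keep only pages 1-4 of a submitted "after" version.
extract_pages("after_manuscript_07.pdf", "after_manuscript_07_methods.pdf", 0, 3)
```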

Five assessors (JdB, MS, PD, PL, and AK) were each allocated a selection of manuscript pairs and assessed the methods sections of the “before” and “after” versions. Each manuscript pair was assessed by three of the five assessors. CS assessed the titles of all 20 manuscripts.
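The protocol does not describe how pairs were allocated to assessors. One illustrative way to produce such an allocation, with three distinct assessors per pair, is sketched below; the seed and pair labels are invented for the example.

```python
# A hedged sketch of one possible allocation of 20 manuscript pairs to
# five assessors so that each pair is assessed by three of them.
# This is illustrative only; it is not the allocation method used in the study.
import random

ASSESSORS = ["JdB", "MS", "PD", "PL", "AK"]
N_PAIRS = 20

random.seed(1)  # fixed seed so the example allocation is reproducible
allocation = {
    f"pair_{i:02d}": random.sample(ASSESSORS, k=3)  # three distinct assessors
    for i in range(1, N_PAIRS + 1)
}
for pair, assigned in allocation.items():
    print(pair, assigned)
```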

The assessors checked whether the “before” version submitted to Penelope.ai contained adequate information for each item in the methods section of the appropriate reporting checklist. Each item was assessed as present, absent, unclear/partial, or not applicable to that manuscript.

The assessors then checked the “after” version for any added information. Each item was assessed as “no change” or “added information”. As each reporting guideline has a different number of items, we report the counts as percentages.
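Because checklists differ in length, a count of five changed items means something different on a 12-item checklist than on a 30-item one, so percentages put the manuscripts on a common scale. A minimal sketch of the tally, using invented item assessments for one manuscript, is:

```python
# Convert per-checklist counts into percentages of checklist items.
# The assessments below are invented for illustration only.
from collections import Counter

# Hypothetical "after"-version assessments for a 10-item methods checklist.
assessments = [
    "no change", "added information", "no change", "no change",
    "added information", "no change", "no change", "no change",
    "added information", "no change",
]

counts = Counter(assessments)
total = len(assessments)
for category, n in counts.items():
    print(f"{category}: {n}/{total} items ({100 * n / total:.0f}%)")
```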
