To ensure the selected items represented a consensus view, we undertook a modified Delphi study (Hasson et al., 2000). The Delphi panel comprised patients, carers and family members (up to N = 25) recruited in November 2020 from the REDUCE programme patient and public involvement group, from other diabetes networks and via snowball sampling. HCPs (up to N = 25) were recruited from the current networks and contacts of the REDUCE researchers and from HCPs who had registered an interest in the REDUCE programme; recruitment was also promoted via the REDUCE_DFU Twitter account. An invitation email was sent with a link to an online survey and consent forms.
Participants were asked to rate each item on a 9-point Likert scale according to its importance for the day-to-day management of DFUs, where 1 indicated ‘not important’ and 9 indicated ‘very important’ (Akins et al., 2005; Vogel et al., 2019). Space was provided alongside each item for free-text comments (Stewart et al., 2017).
In the second round, participants rated the items on which no consensus was reached in round one, together with any new items suggested in the free-text responses. Given the resources available and considerations such as participant fatigue, no more than three rounds were planned, even if consensus was not reached on the remaining items (Keeney et al., 2006).
A formal sample size calculation was not required for this study. The Delphi method is a widely accepted research approach; however, there is no consensus on the size of the panel required. A greater number of participants enhances the reliability of findings and reduces error (Keeney et al., 2011); however, Sekayi and Kennedy (2017) noted that numbers greater than 20–30 become unwieldy in an iterative process.
Descriptive statistical analysis was performed, including percentage of consensus upon each item, mean and standard deviation. Missing data on individual items were noted. Criteria for including items from round 1 into round 2 are reported in Table 1.
Table 1. Criteria for inclusion of items into the next round of the modified Delphi study.
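The per-item descriptive summary described above could be computed along the following lines. This is a minimal sketch: the 7–9 “important” band used for the percentage is an illustrative placeholder only, since the study's actual inclusion criteria are those reported in Table 1.

```python
import numpy as np

def summarise_item(scores, band=(7, 9)):
    """Descriptive summary for one Delphi item.

    scores: 1-9 panel ratings; np.nan marks a missing response.
    band:   illustrative 'important' range -- a placeholder, not the
            study's actual Table 1 inclusion criterion.
    """
    s = np.asarray(scores, dtype=float)
    valid = s[~np.isnan(s)]          # drop missing responses
    lo, hi = band
    return {
        "n_missing": int(np.isnan(s).sum()),
        "mean": round(float(valid.mean()), 2),
        "sd": round(float(valid.std(ddof=1)), 2),   # sample SD
        "pct_in_band": round(float(((valid >= lo) & (valid <= hi)).mean() * 100), 1),
    }
```

A call such as `summarise_item([9, 8, 7, 3, float('nan')])` would report one missing response, a mean of 6.75 and 75% of valid ratings in the 7–9 band.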
To inform the consensus agreement of the Delphi method, we considered the level of agreement across participants and across rounds and documented the degree of stability in scores. To assess inter-rater reliability between rounds, the intra-class correlation coefficient (ICC) was calculated for all items (two-way random effects model, average measures, absolute agreement). This test is appropriate when each item is scored by multiple raters and agreement among the raters is of interest (Koo and Li, 2016). An ICC of 0.75 or above was taken to indicate good reliability, 0.5–0.75 moderate and less than 0.5 poor. A small change in scores was assumed to be a 1-point difference.
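The ICC model named above, two-way random effects with average measures and absolute agreement, is conventionally written ICC(2,k) and can be computed from the two-way ANOVA mean squares. The sketch below assumes a simple data layout (items as rows, raters as columns); it is an illustration of the formula, not the study's analysis code.

```python
import numpy as np

def icc_2k(ratings):
    """ICC(2,k): two-way random effects, average measures, absolute agreement.

    ratings: (n_items, k_raters) array; rows are rated items, columns raters.
    Layout is an assumption for illustration.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    # Two-way ANOVA (no replication) sums of squares
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between items
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # mean square, rows
    msc = ss_cols / (k - 1)                 # mean square, columns
    mse = ss_err / ((n - 1) * (k - 1))      # mean square, error
    return (msr - mse) / (msr + (msc - mse) / n)

def interpret_icc(icc):
    """Reliability bands used in the text (Koo and Li, 2016)."""
    return "good" if icc >= 0.75 else "moderate" if icc >= 0.5 else "poor"
```

With perfect agreement between raters (identical columns), `icc_2k` returns 1.0; the interpretation bands then map the coefficient onto the good/moderate/poor labels used in the text.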
To assess the stability of the raters’ responses, Wilcoxon signed-rank tests were performed on the differences in responses between the two rounds. A p-value greater than 0.05 was interpreted as no evidence of a difference between the Delphi rounds, i.e. stable responses.
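The paired test above might be run as follows with `scipy.stats.wilcoxon`; the scores here are invented for illustration, not study data.

```python
from scipy.stats import wilcoxon

# Hypothetical scores for the same items from one rater in each round.
round1 = [8, 7, 9, 6, 8, 7, 5, 9, 4, 7, 6, 8, 9, 5]
round2 = [7, 9, 9, 5, 7, 8, 7, 9, 5, 6, 6, 7, 8, 6]

# Paired test on round-to-round differences; items with unchanged scores
# (zero differences) are dropped by scipy's default zero_method='wilcox'.
stat, p = wilcoxon(round1, round2)
print(f"W = {stat}, p = {p:.3f}")
if p > 0.05:
    print("No evidence of a difference between rounds (stable responses).")
```

Note that with small panels and many tied 1-point differences the exact distribution is unavailable and scipy falls back to a normal approximation, which is worth reporting alongside the p-value.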