The first phase of analysis involved identifying factors influencing implementation, including factors specific to fidelity. In the second phase, data on fidelity were analysed. Findings from both phases were then integrated. The analysis was performed at two levels: within each case and then across the cases (56, 58).
Using NVivo 11 software for data management, framework analysis (59, 60) was performed, with the CFIR as the a priori framework. Thematic analysis was used where data did not fit within the framework (61), and new constructs were added to the codebook; these are denoted with an asterisk in Supplementary Material 2. Two researchers (CKer and SMH) independently coded data from one hospital to check for coding consistency and to modify CFIR construct definitions where necessary. For example, the construct Patient Needs & Resources in the Outer Setting domain was modified to Consumer Needs & Resources to reflect that this was a staff-facing intervention (see Supplementary Material 2). Once the codebook was agreed, the lead investigator (CKer) analysed data from the remaining hospitals (see Supplementary Material 2 for the final codebook). Using causation coding (61), relationships between constructs were also identified by the lead investigator (CKer) based on interview and observational data for each hospital and were checked independently by another researcher (CR). A consensus-building coding process was adopted throughout the analysis (61).
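To make the coding structure concrete, the following is a minimal sketch of such a codebook as a nested dictionary; apart from the modification described above, the construct names and definitions are hypothetical, and the actual codebook is given in Supplementary Material 2.

```python
# A minimal sketch of the codebook structure (hypothetical; see
# Supplementary Material 2 for the actual codebook). Constructs added
# through thematic analysis are flagged, mirroring the asterisks used
# in the supplementary material.
codebook = {
    "Outer Setting": {
        # Modified from the CFIR's "Patient Needs & Resources" to
        # reflect the staff-facing nature of the intervention.
        "Consumer Needs & Resources": {
            "definition": "Hypothetical definition of the construct.",
            "added_construct": False,
        },
    },
    "Inner Setting": {
        "Example Added Construct": {  # hypothetical new construct (*)
            "definition": "Construct added where data did not fit CFIR.",
            "added_construct": True,
        },
    },
}

# List all constructs added outside the a priori CFIR framework.
added = [
    name
    for domain in codebook.values()
    for name, meta in domain.items()
    if meta["added_construct"]
]
print(added)
```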
To assess fidelity, descriptive statistics were calculated for each hospital using data from structured observations. This included scores per policy condition for each menu and a total score for each menu. An average score across the menus was then calculated for each hospital. Finally, cut-off points were applied to categorise hospitals as having high (average score >4) or low (average score ≤4) levels of implementation fidelity. Similar to other research (62), these cut-off points were based on minimally accepted practices, agreed through discussion and consensus within the research group.
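As an illustration of the scoring and categorisation step, the following is a minimal sketch assuming hypothetical menu scores; the hospital names and example values are illustrative, and only the >4 cut-off comes from the text above.

```python
# A sketch of the fidelity scoring and categorisation described above.
# Per-hospital menu scores are hypothetical; each menu score is assumed
# to have already been totalled across policy conditions.
import statistics

menu_scores = {
    "Hospital A": [5, 4, 5],
    "Hospital B": [3, 4, 2],
}

CUT_OFF = 4  # average score >4 = high fidelity, <=4 = low fidelity

for hospital, scores in menu_scores.items():
    average = statistics.mean(scores)  # average score across menus
    level = "high" if average > CUT_OFF else "low"
    print(f"{hospital}: average={average:.2f}, fidelity={level}")
```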
These data were supplemented with other sources of evidence relating to implementation fidelity that were not specified in the protocol (55). Information on fidelity emerged during semi-structured interviews and unstructured observations without formal assessment or questioning. These data were analysed thematically to further explore implementation fidelity at each hospital (61).
A triangulation protocol was used to integrate fidelity findings from four sources (i.e., HSE progress reports, structured observations, semi-structured interviews and unstructured observations) (63–65). A “convergence coding matrix” was created, with independent findings (rows) mapped across the data sources (columns). The relationship between data sources was categorised for each finding as: (1) silence (only one of the two sources being compared contained data on the finding), (2) dissonance (conflicting findings in the data), (3) partial agreement (complementarity between the data) or (4) agreement (convergence in the data) (65). The triangulation process facilitated the identification of meta-themes that cut across the four sources of fidelity data (63, 64). In line with best practice (63, 64), multiple researchers (CKer, SMH and ET) with combined expertise in qualitative and quantitative methods worked together during triangulation.
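The pairwise categorisation logic of the convergence coding matrix can be sketched as follows; the finding, the source codings and the "supports"/"conflicts" labels are hypothetical, and partial agreement (complementarity) is in practice a qualitative judgement rather than a mechanical rule, as noted in the comments.

```python
# A sketch of pairwise comparisons in a convergence coding matrix.
# Codings per source are hypothetical: "supports", "conflicts", or
# None where a source contains no data on the finding.
from itertools import combinations

findings = {
    "Example finding about menu labelling": {
        "progress_reports": "supports",
        "structured_observations": "supports",
        "semi_structured_interviews": "conflicts",
        "unstructured_observations": None,
    },
}

def categorise(a, b):
    """Relationship between two sources for one finding."""
    if (a is None) != (b is None):
        return "silence"       # only one of the two sources has data
    if a is None and b is None:
        return "no data"       # neither source addresses the finding
    if a != b:
        return "dissonance"    # conflicting findings
    # "Partial agreement" (complementarity) requires qualitative
    # judgement and is not distinguished from full agreement here.
    return "agreement"         # convergence in the data

for finding, codings in findings.items():
    for (s1, v1), (s2, v2) in combinations(codings.items(), 2):
        print(f"{finding}: {s1} vs {s2} -> {categorise(v1, v2)}")
```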
Initially, as per the protocol and purposive sampling strategy, the study set out to identify determinants of implementation success by comparing hospitals with high and low levels of fidelity based on the progress reports (55). However, the integrated analysis of fidelity from multiple sources called this dichotomy into question. As a result, the cross-case analysis focused on identifying patterns of fidelity and influencing factors across hospitals. For the cross-case analysis, data were first analysed for each hospital, as outlined above (i.e., examining fidelity and CFIR constructs per hospital). Findings were then compared across hospitals using matrices (displaying findings in rows and hospitals in columns). Quantitative and qualitative data were integrated using joint displays to identify meaningful similarities, differences and case-specific experiences (50). In a deviation from the protocol (55), the perspectives of different stakeholders were not compared through formal analysis; however, the type of stakeholder was taken into account when presenting commonly cited influencing factors.
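A joint display of this kind can be sketched as a simple matrix with findings as rows and hospitals as columns; the hospitals, row labels and cell values below are hypothetical.

```python
# A minimal sketch of a cross-case joint display: findings as rows,
# hospitals as columns, mixing the quantitative fidelity category with
# qualitative summaries. All values are hypothetical.
results = {
    "Hospital A": {"Fidelity": "high", "Leadership": "engaged", "Resources": "adequate"},
    "Hospital B": {"Fidelity": "low", "Leadership": "absent", "Resources": "limited"},
}
rows = ["Fidelity", "Leadership", "Resources"]

header = ["Finding"] + list(results)
print(" | ".join(f"{cell:<12}" for cell in header))
for row in rows:
    cells = [row] + [results[h].get(row, "-") for h in results]
    print(" | ".join(f"{cell:<12}" for cell in cells))
```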