Methods for analysis

Josie Dickerson, Philippa K. Bird, Rosemary R. C. McEachan, Kate E. Pickett, Dagmar Waiblinger, Eleonora Uphoff, Dan Mason, Maria Bryant, Tracey Bywater, Claudine Bowyer-Crane, Pinki Sahota, Neil Small, Michaela Howell, Gill Thornton, Melanie Astin, Debbie A. Lawlor, John Wright

The BiBBS experimental cohort design will enable evaluation of the BSB interventions using both experimental and quasi-experimental methods, in addition to traditional epidemiological approaches to the analysis of observational cohort data.

The BiBBS cohort will form a platform to assess the effectiveness of a selection of BSB interventions using a randomised controlled Trials within Cohorts design (TwiCs; also called the cohort multiple randomised controlled trial design) [15]. TwiCs are randomised controlled trials implemented within cohort study samples, with regular outcome measurement as part of the cohort data collection. BiBBS participants will be asked to provide consent to be part of a TwiCs study during cohort recruitment. In BiBBS, routinely collected health record data will be used as outcome measures.

Eligible participants for each intervention chosen for inclusion in a TwiCs evaluation will be identified from the cohort sample. A group will be randomly selected to receive the intervention, and their outcomes will be compared with eligible participants who were not randomly selected. The process can be repeated many times within a cohort, such that a cohort study hosts multiple TwiCs [15].
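The selection step described above can be sketched in code. The following is a minimal illustration, not part of the protocol: the participant identifiers, the eligibility rule and the outcome data are all hypothetical, and a real TwiCs analysis would use appropriate intention-to-treat models rather than a raw difference in means.

```python
import random

def twics_select(cohort, is_eligible, n_offer, seed=0):
    """Identify eligible cohort members, then randomly select n_offer of
    them to be offered the intervention; the remaining eligible members
    form the comparison group (usual care)."""
    pool = [pid for pid in cohort if is_eligible(pid)]
    rng = random.Random(seed)  # fixed seed so the selection is reproducible
    offered = set(rng.sample(pool, n_offer))
    comparison = [pid for pid in pool if pid not in offered]
    return sorted(offered), comparison

def mean_difference(outcomes, offered, comparison):
    """Crude contrast: mean outcome among those offered the intervention
    minus the mean among eligible members who were not selected."""
    mean = lambda ids: sum(outcomes[i] for i in ids) / len(ids)
    return mean(offered) - mean(comparison)
```

Re-running `twics_select` with a different eligibility rule for each intervention is, in miniature, how a single cohort can host several TwiCs.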

It is planned that the cohort will host at least three TwiCs evaluations. A rapid consensus exercise has been conducted to identify possible interventions for TwiCs evaluation, based on the current evidence base (i.e. addressing gaps in the evidence rather than duplicating existing evidence) and on ethical and logistical grounds. Interventions commissioned by BSB may not be withheld from families; however, capacity constraints may mean that some families do not receive an intervention or must wait to take part. In this context, random selection provides an ethical approach to deciding who takes part. The final decision on eligibility for TwiCs will be made once interventions have been implemented and capacity has been assessed. Separate protocols will be prepared for each TwiCs evaluation.

For most of the BSB interventions, random allocation of families will not be possible due to ethical and logistical constraints. Quasi-experimental methods will instead be employed to estimate the causal effects of these interventions. We will consider a range of methods, including propensity scores, regression discontinuity designs and instrumental variables. Quasi-experimental methods are recommended for evaluating interventions or policy changes in ‘real-world’ circumstances where researchers cannot manipulate which families receive an intervention [12].
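Of the methods listed, instrumental variable analysis has the simplest textbook estimator. The sketch below is purely illustrative and assumes a binary instrument (for example, an administrative allocation rule that shifts uptake without otherwise affecting outcomes, a strong assumption that would need justification in any real analysis). It computes the Wald estimate: the jump in mean outcome across instrument groups divided by the corresponding jump in intervention uptake.

```python
def wald_iv(instrument, uptake, outcome):
    """Wald estimator for a binary instrument: the difference in mean
    outcome between instrument groups, scaled by the difference in
    intervention uptake between the same two groups."""
    def group_mean(values, flag):
        sel = [v for z, v in zip(instrument, values) if bool(z) == flag]
        return sum(sel) / len(sel)
    outcome_jump = group_mean(outcome, True) - group_mean(outcome, False)
    uptake_jump = group_mean(uptake, True) - group_mean(uptake, False)
    return outcome_jump / uptake_jump
```

The denominator is the instrument's "strength": the weaker the instrument's effect on uptake, the noisier the estimate, which is why instrument validity and strength would be assessed before relying on this approach.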

Propensity score approaches can be employed for any intervention taken up by a group within the cohort, in order to construct a balanced control group by weighting or matching. Propensity scores (representing the predicted probability that an individual or family will take part in an intervention, given their baseline characteristics) will be calculated using data collected at baseline. The outcomes for the two groups can then be compared. This approach will minimise selection bias in the analysis and allow the causal effects of interventions to be inferred [23, 24]. For example, for participants in an intervention to support women with mental health problems, propensity scores could be calculated to match a control group of women with similar baseline characteristics, including depression and anxiety screening scores, socioeconomic status and ethnicity. Outcomes for women who have taken part in the intervention can then be compared with the matched control group to estimate the intervention effects, for example on maternal mental health, mother-child attachment and child development. However, where factors that predict uptake of interventions have not been identified or accurately measured at baseline, there will be residual differences between groups and remaining concerns about selection bias.
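As a concrete illustration of the estimate-then-match step, the sketch below fits a logistic propensity model by plain gradient ascent and pairs each intervention participant with the non-participant whose score is nearest. Everything here is hypothetical: the data, the learning rate and the step count are assumptions, and in practice this would be done with an established statistics package together with balance diagnostics on the matched sample.

```python
import math

def fit_propensity(X, treated, lr=0.5, steps=2000):
    """Fit logistic regression P(treated | x) by gradient ascent.
    X is a list of feature vectors, each with a leading 1.0 intercept."""
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, ti in zip(X, treated):
            p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j, xj in enumerate(xi):
                grad[j] += (ti - p) * xj  # log-likelihood gradient
        w = [wj + lr * gj / n for wj, gj in zip(w, grad)]
    return w

def propensity(w, x):
    """Predicted probability of taking part, given baseline features x."""
    return 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, x))))

def nearest_match(scores, treated):
    """Match each participant to the non-participant with the closest
    propensity score (nearest-neighbour matching, with replacement)."""
    controls = [i for i, t in enumerate(treated) if not t]
    return [(i, min(controls, key=lambda c: abs(scores[c] - scores[i])))
            for i, t in enumerate(treated) if t]
```

Outcomes would then be compared across the matched pairs; any systematic score imbalance that survives matching signals the residual-confounding concern noted above.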

Regression discontinuity designs can be used for projects with an eligibility cut-off for individuals/families based on a continuous assignment variable (e.g. low BMI, age 19 or lower) [25, 26]. Regression discontinuity approaches model child outcomes on the assignment variable to assess whether there is a gap in outcomes (a discontinuity) at the eligibility cut-off. Where possible, analysis will be restricted to families/individuals whose assignment variable scores are close to the cut-off, where the intervention can be thought of as randomly assigned (especially if there is measurement error in the continuous variable). The difference in mean outcomes between the groups just above and just below the cut-off estimates the average causal effect at the cut-off. For example, for an intervention targeting teenage pregnancies, age at conception can be used as the assignment variable. Outcomes for women on either side of the cut-off (i.e. age 20) can be compared to estimate the effect of the intervention, for example on breastfeeding initiation or mother-child attachment.
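A sharp regression discontinuity estimate of this kind can be sketched as two local linear fits, one on each side of the cut-off, with the effect read off as the jump between their intercepts. The example below is illustrative only: the bandwidth and the synthetic data are assumptions, and a real analysis would choose the bandwidth systematically and compute proper standard errors.

```python
def ols_line(xs, ys):
    """Closed-form least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def rdd_effect(assign, outcome, cutoff, bandwidth):
    """Sharp RD estimate: fit separate lines within the bandwidth on each
    side of the cut-off (centred at the cut-off, so the intercept is the
    predicted outcome at the cut-off) and return the jump, above minus
    below."""
    below = [(a - cutoff, y) for a, y in zip(assign, outcome)
             if cutoff - bandwidth <= a < cutoff]
    above = [(a - cutoff, y) for a, y in zip(assign, outcome)
             if cutoff <= a <= cutoff + bandwidth]
    a_below, _ = ols_line([x for x, _ in below], [y for _, y in below])
    a_above, _ = ols_line([x for x, _ in above], [y for _, y in above])
    return a_above - a_below
```

In the teenage-pregnancy example, `assign` would be age at conception with `cutoff=20`; because the intervention is offered below the cut-off, a beneficial effect would appear as a negative above-minus-below jump in the outcome.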

We will consider both the effects of single interventions and the cumulative effects of stacked (multiple) interventions for pregnant women and children in early life. Pathways through the 22 interventions will be identified for target groups (e.g. pregnant teenagers, women with mental health issues, women with no English language skills) and key outcome domains (see Table 3). We will analyse the effect of attending a single intervention and of attending a pathway of stacked interventions. The BSB programme offers a unique opportunity to complete a novel and pragmatic analysis of the impact of stacked interventions for at-risk groups of mothers and children.

The cohort will also facilitate ongoing quality improvement for BSB interventions. Ongoing measurement and monitoring of intervention uptake and outcomes will enable learning and adaptation of interventions to ensure that they are widely used and effective. A separate protocol will be written for the process evaluation, which will follow Medical Research Council (MRC) guidance [27], incorporating the Conceptual Framework for Implementation Fidelity [28].
