Data Analytic Strategy

We descriptively compared the engagement of the experimental group (weeks 0-4) with that of the control group (weeks 4-8) across each group’s respective first 4 weeks of app exposure. As the distributions of the three engagement variables were highly positively skewed, we report median engagement metrics with their IQRs.
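For illustration only, the sketch below computes this kind of group-wise median and IQR summary in pandas; the engagement metric names (sessions, skills_viewed, days_active) and values are hypothetical placeholders, not the study’s actual variables or data.

```python
import pandas as pd

# Hypothetical per-participant engagement data; the metric names below are
# illustrative placeholders, not the study's actual engagement variables.
engagement = pd.DataFrame({
    "group": ["experimental"] * 4 + ["control"] * 4,
    "sessions": [2, 15, 4, 40, 1, 9, 3, 22],
    "skills_viewed": [1, 30, 6, 55, 0, 12, 5, 33],
    "days_active": [1, 10, 3, 25, 1, 6, 2, 14],
})

# Group-wise 25th, 50th (median), and 75th percentiles, ie, the median and
# IQR summary appropriate for positively skewed engagement distributions.
summary = engagement.groupby("group")[
    ["sessions", "skills_viewed", "days_active"]
].quantile([0.25, 0.50, 0.75])
print(summary)
```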

Analyses of all outcome variables were performed using an intention-to-treat approach, which included all available data from participants randomly assigned to the experimental and control groups. We took a two-step approach to these analyses, reflecting our two main lines of inquiry. In step 1, we tested the primary and secondary hypotheses that the experimental group would report lower loneliness and better scores on other indicators of mental health and college adjustment at the end of treatment (week 4) compared with the control group. In step 2, we tested the hypothesis that treatment benefits would be more pronounced for participants with heightened psychological vulnerability at baseline.

Step 1 evaluated condition differences in outcomes at the end of treatment (week 4). Because missing data at week 4 were minimal (213/221, 96.4% of the sample provided complete data on all outcome variables), we opted for a straightforward analytic approach that compared the means of the experimental and control groups on each outcome at week 4, adjusting for each outcome’s respective baseline value. A separate analysis of covariance (ANCOVA) was conducted for each outcome, and each model was evaluated on the basis of the statistical significance (P<.05) of the condition term (1=experimental; 0=control). Two outcomes, social adjustment to college and perceived social support, were not measured at baseline because participants had not yet had enough social experiences on campus to meaningfully answer the survey questions. Models for these two outcomes therefore omit the baseline score as a covariate.
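A minimal sketch of this kind of baseline-adjusted ANCOVA, assuming a per-participant data frame read from a hypothetical file (outcomes.csv) with placeholder column names (condition, loneliness_wk0, loneliness_wk4); it illustrates the approach described above rather than reproducing the study’s actual code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed per-participant data with hypothetical columns:
#   condition        1 = experimental, 0 = control
#   loneliness_wk0   baseline score on the outcome
#   loneliness_wk4   end-of-treatment score on the outcome
df = pd.read_csv("outcomes.csv")  # assumed file

# ANCOVA: week-4 outcome regressed on condition, adjusting for baseline.
model = smf.ols("loneliness_wk4 ~ condition + loneliness_wk0", data=df).fit()

# The condition coefficient estimates the adjusted group difference at week 4;
# its P value (<.05) is the decision criterion described above.
print(model.summary().tables[1])

# Outcomes without a baseline measure (eg, social adjustment, perceived
# social support) would simply omit the baseline covariate:
# smf.ols("social_adjustment_wk4 ~ condition", data=df).fit()
```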

Step 2 added an interaction term between baseline vulnerability and condition, allowing us to evaluate whether the benefits of Nod were more pronounced for more vulnerable students. The model of loneliness at week 4 included four predictors: condition, baseline loneliness, baseline depression, and a condition-by-baseline depression interaction term to capture baseline vulnerability. Models for all other outcomes also included four predictors: condition, baseline loneliness, baseline score on the outcome variable, and a condition-by-baseline loneliness interaction term. We selected depression as the baseline moderator of week-4 loneliness, and loneliness as the baseline moderator of week-4 depression and all other outcomes, given previous research demonstrating a strong bivariate and reciprocal relationship between loneliness and depression [8,12,78], including in first-year college students [78], and the strong relationship between the two at baseline in this study (r=0.52). To determine whether Nod differentially benefited vulnerable participants, each model was evaluated on the basis of the statistical significance (P<.05) of the interaction term.
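A sketch of these step 2 moderation models under the same assumptions as above (hypothetical file and column names); the interaction term’s P value is the quantity of interest.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Same assumed per-participant data frame as above; all column names are
# hypothetical placeholders, not the study's actual variable names.
df = pd.read_csv("outcomes.csv")

# Week-4 loneliness: condition, baseline loneliness, baseline depression,
# plus a condition x baseline-depression interaction (the vulnerability term).
loneliness_model = smf.ols(
    "loneliness_wk4 ~ condition + loneliness_wk0 + depression_wk0"
    " + condition:depression_wk0",
    data=df,
).fit()

# Any other outcome (week-4 depression shown here): condition, baseline
# loneliness, the outcome's own baseline, plus a condition x
# baseline-loneliness interaction.
depression_model = smf.ols(
    "depression_wk4 ~ condition + loneliness_wk0 + depression_wk0"
    " + condition:loneliness_wk0",
    data=df,
).fit()

# The P value of each interaction term is the decision criterion (P<.05).
print(loneliness_model.pvalues["condition:depression_wk0"])
print(depression_model.pvalues["condition:loneliness_wk0"])
```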

To validate the results, we separately modeled comparisons between outcomes in the control group at week 8 and outcomes in the experimental group at week 4 (Multimedia Appendix 5).

To explore whether greater engagement with Nod was associated with greater improvement in outcomes, we report correlations, within the experimental group, between our three measures of engagement and within-participant change in each outcome variable from week 0 to week 4. Because of the skewed distribution of the engagement variables, we report nonparametric (ie, Spearman ρ) correlations. Social support and social adjustment to college, which were not measured at baseline, are excluded from these analyses.
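A brief sketch of these engagement-change correlations, again with assumed file and column names (experimental arm only); Spearman ρ is used because of the skewed engagement distributions.

```python
import pandas as pd
from scipy.stats import spearmanr

# Experimental-arm data with hypothetical column names: three engagement
# metrics plus week-0 and week-4 scores for each baseline-measured outcome.
exp = pd.read_csv("experimental_group.csv")  # assumed file

engagement_metrics = ["sessions", "skills_viewed", "days_active"]
outcomes = ["loneliness", "depression"]  # placeholders for baseline-measured outcomes

for outcome in outcomes:
    # Within-participant change from week 0 to week 4
    # (negative = improvement for symptom measures).
    change = exp[f"{outcome}_wk4"] - exp[f"{outcome}_wk0"]
    for metric in engagement_metrics:
        # Spearman rank correlation is robust to the skewed engagement data.
        rho, p = spearmanr(exp[metric], change, nan_policy="omit")
        print(f"{metric} vs change in {outcome}: rho={rho:.2f}, P={p:.3f}")
```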

Within the experimental group, we report the percentage of users endorsing each desirability statement. Open-ended feedback was analyzed by a single coder using a general inductive approach [79]. Core questions guiding the coding included “What do students like about Nod?”, “What do they wish would change?”, and “Based on participant feedback, what factors might improve user experience and engagement with the app?” Quotes were selected to exemplify prominent themes. To validate the results, we report quantitative comparisons between the user experience of the control group at week 8 and that of the experimental group at week 4 (Multimedia Appendix 5).
