Demographic data (age, sex, BMI, diagnosis, years since diagnosis, physical activity, and medication) and the primary feasibility outcomes were reported using narrative and descriptive statistics. The attrition rate, defined as the percentage of dropouts, was deemed acceptable at 10% or less. The adherence rate was calculated as the number of completed training sessions relative to the number of scheduled sessions, with a rate of 80% or higher considered acceptable [42]. Results from the satisfaction questionnaire were reported as the percentage of agreement with the individual items. The total TAM score was obtained by summing all item scores, and the four TAM subscales were scored as the mean of their respective item responses. Mean, standard deviation, and minimum/maximum values were then calculated for the total score and the subscales.
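For illustration only, a minimal sketch of how the adherence rate and TAM scoring described above could be computed; the session counts and the item-to-subscale mapping are hypothetical placeholders, not the study's data or instrument layout.

```python
import statistics


def adherence_rate(completed_sessions: int, scheduled_sessions: int) -> float:
    """Adherence rate as the percentage of scheduled sessions completed."""
    return 100 * completed_sessions / scheduled_sessions


def tam_scores(item_scores: dict[str, list[float]]) -> dict:
    """Total TAM score (sum of all items) and subscale scores (mean of each
    subscale's items). The subscale names and item groupings passed in are
    hypothetical; the actual TAM item allocation is defined by the instrument."""
    total = sum(score for items in item_scores.values() for score in items)
    subscales = {name: statistics.mean(items) for name, items in item_scores.items()}
    return {"total": total, "subscales": subscales}


# Hypothetical example: a participant completing 22 of 24 scheduled sessions
print(adherence_rate(22, 24))  # ~91.7%, above the 80% acceptability threshold
```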
For the secondary outcomes, descriptive statistics (mean and standard deviation for interval data; median and interquartile range for ordinal data) and any floor or ceiling effects (more than 15% of participants achieving the lowest or highest possible score) [43] were reported. Normality of the data was evaluated with the Shapiro–Wilk test. Depending on whether the data were normally or non-normally distributed, paired t-tests or Wilcoxon signed-rank tests were used to compare pre- and post-intervention measurements. Only data from participants who adhered to the training protocol and did not drop out were analysed (per-protocol analysis). Because the statistical analysis of the secondary outcomes did not focus on significance, effect sizes (ES) were calculated for within-group differences. Since the effect size depends on the dispersion, which is artificially reduced when a ceiling effect is present and therefore inflates the ES, effect sizes were calculated only for assessments without a ceiling effect at baseline. For non-normally distributed data, the effect size was expressed as r = z/√N, where z is the standard-normal approximation of the observed difference and N is the total number of observations. For interpretation, r values of 0.1, 0.3, and 0.5 were considered small, medium, and large effects, respectively [44]. For normally distributed data, Cohen's d was calculated as pre-post ES = (post-test mean − pre-test mean)/pre-test standard deviation, with 0.2 indicating a small, 0.5 a medium, and 0.8 a large effect [45, 46]. IBM SPSS Statistics Version 25.0 (IBM Corp., Armonk, NY, USA) was used for data analysis.
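As a hedged illustration of the pre-post comparison and effect-size calculations described above (a sketch in Python with SciPy, not the authors' SPSS procedure), the following code tests the paired differences for normality and then derives either Cohen's d or r = z/√N; the function name and the example data are hypothetical.

```python
import numpy as np
from scipy import stats


def pre_post_effect_size(pre: np.ndarray, post: np.ndarray, alpha: float = 0.05) -> dict:
    """Compare paired pre/post scores and return a within-group effect size.

    Normally distributed differences -> paired t-test and Cohen's d
    (post-test mean minus pre-test mean, divided by the pre-test SD).
    Non-normal differences -> Wilcoxon signed-rank test and r = z / sqrt(N),
    with |z| recovered from the two-sided p-value of the approximate test
    and N taken here as the number of paired observations.
    """
    diffs = post - pre
    normal = stats.shapiro(diffs).pvalue > alpha

    if normal:
        t_stat, p_value = stats.ttest_rel(post, pre)
        es = (post.mean() - pre.mean()) / pre.std(ddof=1)  # Cohen's d with pre-test SD
        measure = "Cohen's d"
    else:
        # method="approx" forces the normal approximation (SciPy >= 1.9; older
        # versions use the keyword mode= instead)
        w_stat, p_value = stats.wilcoxon(pre, post, method="approx")
        z = stats.norm.isf(p_value / 2)        # |z| from the two-sided p-value
        es = z / np.sqrt(len(diffs))           # r = z / sqrt(N)
        measure = "r"

    return {"p_value": p_value, "effect_size": es, "measure": measure}


# Hypothetical example data (not study results)
pre = np.array([12.0, 10.5, 14.0, 11.0, 13.5, 12.5, 10.0, 15.0])
post = np.array([13.5, 11.0, 15.5, 12.0, 14.0, 13.0, 11.5, 16.0])
print(pre_post_effect_size(pre, post))
```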