Focal analyses


The 50 multi-morbidity items were submitted to an exploratory factor analysis. Decisions about the number of factors (health domains) to extract and which indicators to retain were guided by the best-practices literature [34, 35]. We then verified that this latent structure fit the data using confirmatory factor analysis, with model fit evaluated using standard indices (see Supplementary Methods, Additional File 1).
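
The article does not name the software used; as a rough illustration of the exploratory step, the sketch below uses Python's factor_analyzer package on a hypothetical data frame of the 50 items. The file name, factor range, and loading cutoff are illustrative assumptions, and treating the items as continuous is a simplification.

```python
# Illustrative only: explore candidate factor solutions for the 50 items.
# File name, factor range, and the .30 loading cutoff are hypothetical choices,
# not taken from the article.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("multimorbidity_items.csv")  # 50 item columns (hypothetical file)

for n_factors in range(2, 7):
    efa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin")
    efa.fit(items)
    loadings = pd.DataFrame(efa.loadings_, index=items.columns)
    # Indicators with no salient loading are candidates for removal before
    # the confirmatory step (the retained structure is then re-fit as a CFA).
    weak = (loadings.abs().max(axis=1) < 0.30).sum()
    print(f"{n_factors} factors: {weak} indicators without a salient loading")
```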

For the latent profile analysis (LPA), we fit a sequence of models with an increasing number of latent profiles (e.g., 1, 2, 3). We selected the best-fitting model based on interpretability of the study findings, as well as the following model parameters, tests, and fit indices [36]: (a) log-likelihood value (LL), (b) number of parameters estimated, (c) Bayesian Information Criterion (BIC), (d) sample-size-adjusted BIC (SABIC), (e) Akaike Information Criterion (AIC), (f) adjusted Lo-Mendell-Rubin likelihood ratio test (LMR-LRT), (g) adjusted Vuong-Lo-Mendell-Rubin likelihood ratio test (VLMR-LRT), and (h) entropy. Lower values of BIC, SABIC, and AIC indicate better fit [10]. The LMR-LRT and VLMR-LRT compare the current model with k profiles against a model with one fewer profile (k-1); a non-significant p value supports retaining the k-1 profile model [10]. Entropy (ranging from 0 to 1) is not used for model selection but indexes classification accuracy, with higher values indicating more accurate classification.
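
As a rough illustration of this selection procedure, the sketch below fits Gaussian mixture models with a shared covariance matrix, a scikit-learn approximation of the class-invariant LPA rather than the latent-variable software implied by the LMR-LRT/VLMR-LRT (those tests are not computed here). `X` is a hypothetical indicator matrix.

```python
# Illustrative only: a Gaussian mixture with a "tied" (shared) covariance
# matrix approximates the class-invariant LPA described here. Only the
# information criteria and entropy are computed; the LMR-LRT and VLMR-LRT
# are not available in scikit-learn. `X` is a hypothetical
# (persons x indicators) array.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_summary(X, k, n_init=50):  # far fewer starts than the 5,000 used in the analysis
    gmm = GaussianMixture(n_components=k, covariance_type="tied",
                          n_init=n_init, random_state=0).fit(X)
    n = X.shape[0]
    ll = gmm.score(X) * n                  # total log-likelihood
    p = gmm._n_parameters()                # parameters estimated (sklearn internal helper)
    sabic = -2 * ll + p * np.log((n + 2) / 24.0)  # sample-size-adjusted BIC
    post = gmm.predict_proba(X)
    entropy = (1 - (-(post * np.log(post + 1e-12)).sum()) / (n * np.log(k))
               if k > 1 else 1.0)
    return {"k": k, "LL": ll, "params": p, "AIC": gmm.aic(X), "BIC": gmm.bic(X),
            "SABIC": sabic, "entropy": entropy}

# for k in range(1, 6):
#     print(fit_summary(X, k))
```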

To avoid local maxima, we used 5,000 sets of starting values. Indicators were allowed to covary within profiles, while the variances and covariances were constrained to be equal across profiles (i.e., a class-invariant unrestricted structure). Alternative models that freely estimated the variances and covariances across profiles did not converge, suggesting over-parameterization [37]. We controlled for potential age effects by regressing the observed indicators and profile membership on age. An adapted Cohen's d formula was used to (a) calculate standardized mean differences between latent profiles on the observed indicators and (b) facilitate interpretation of the final latent-profile solution [36]. Values > 2.0 indicate less than 20% overlap in the profile-specific distributions and a high degree of separation on the associated indicator, whereas values < 0.85 indicate more than 50% overlap and a low degree of separation.
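
A minimal sketch of this separation check, assuming hypothetical arrays `X` (indicators) and `labels` (most likely profile membership) from the fitted LPA:

```python
# Standardized mean differences (adapted Cohen's d) between every pair of
# profiles on each indicator; d > 2.0 ~ <20% overlap (high separation),
# d < 0.85 ~ >50% overlap (low separation).
import itertools
import numpy as np

def profile_separation(X, labels):
    d_by_pair = {}
    for a, b in itertools.combinations(np.unique(labels), 2):
        xa, xb = X[labels == a], X[labels == b]
        pooled_sd = np.sqrt((xa.var(axis=0, ddof=1) + xb.var(axis=0, ddof=1)) / 2)
        d_by_pair[(a, b)] = np.abs(xa.mean(axis=0) - xb.mean(axis=0)) / pooled_sd
    return d_by_pair
```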

We examined how the frailty profiles related to the intercept (performance at a statistical centering age) and linear slope (longitudinal change) of neurocognitive speed using the manual BCH method (for further details, see [38, 39]). We tested whether the latent profiles differed in level of performance or rate of change by comparing nested models, in which the intercept or the linear slope of speed was constrained to be equal across profiles, with the full model, in which both parameters were freely estimated for each profile. Significant differences were inferred from a -2 log-likelihood (-2LL) difference statistic (D, evaluated as a χ2 test at p < .10) comparing the constrained model to the unconstrained model.
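
The nested-model comparison can be expressed compactly; the sketch below computes the -2LL difference statistic and its χ2 p value from log-likelihoods and a parameter-count difference (the values in the usage comment are hypothetical).

```python
# -2LL difference (likelihood ratio) test between the constrained model
# (intercepts or slopes held equal across profiles) and the unconstrained model.
from scipy.stats import chi2

def lr_test(ll_unconstrained, ll_constrained, df_diff):
    D = -2 * (ll_constrained - ll_unconstrained)  # -2LL difference statistic
    return D, chi2.sf(D, df_diff)

# Example (hypothetical values): constraining the slope to be equal across
# 3 profiles removes 2 parameters.
# D, p = lr_test(ll_unconstrained=-1502.3, ll_constrained=-1506.9, df_diff=2)
# Profiles are inferred to differ in rate of change if p < .10.
```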

We tested whether membership in the frailty profiles was comparable across sex by performing a multinomial logistic regression using the R3step approach (for further details, see [40]). We examined whether frailty-cognition associations generalized across sex by regressing the intercept and slope of speed on sex separately for each profile.
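
As a rough analogue of the sex comparison, the sketch below regresses most-likely profile membership on sex with a multinomial logistic regression in statsmodels; unlike the R3step approach, this naive version does not correct for classification error. `labels` and `sex` are hypothetical arrays.

```python
# Naive analogue of the R3step comparison: regress most-likely profile
# membership on sex (0/1). This ignores classification error, so it is only
# an approximation of the auxiliary-variable approach described above.
import statsmodels.api as sm

def profile_by_sex(labels, sex):
    exog = sm.add_constant(sex.astype(float))
    fit = sm.MNLogit(labels, exog).fit(disp=False)
    return fit.summary()  # log-odds of membership in each profile vs. the reference profile
```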
