We assessed the reliability/internal consistency of a task score through within-task correlations and the convergent/discriminant validity through between-task correlations. Here, internal consistency indicates how strongly different scores of a given task are related to each other and whether they represent the same (core) construct [8], quantifying within-occasion reliability. If the scores measure a single construct, they should yield more homogeneous results and therefore higher internal consistency; if internal consistency is low, the scores might be measuring more than one construct [8]. Additionally, variables thought to reflect similar constructs would be expected to correlate closely with each other, indicating convergent validity; in contrast, measures reflecting unrelated constructs should not correlate with each other, revealing discriminant validity [10]. In other words, construct validity is supported when correlations between different task scores are high for the same (or a similar) trait but low for different traits. We computed Spearman rank correlation coefficients between task scores across individuals, pooling data from all Studies. Only scores related to RT, the slope of RT, and ANT-specific measures were considered for this analysis. We did not include accuracy- or variability-related measures because of the lack of test-retest availability described in Section 3.2; however, slopes of RT were included because of their relevance for evaluating performance over time. P-values were adjusted for multiple comparisons by the False Discovery Rate (FDR) at the level of each score.
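The following is a minimal sketch of this correlation analysis in Python (the protocol does not specify the software used); the data frame, sample size, and column names are hypothetical, and for brevity the FDR correction is applied across all pairs rather than separately at the level of each score as described above.

```python
import itertools
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

# Hypothetical data: one row per participant, one column per task score
# (RT means, RT slopes, ANT-specific measures); names are illustrative only.
rng = np.random.default_rng(0)
scores = pd.DataFrame({
    "taskA_mean_RT":  rng.normal(size=60),
    "taskA_RT_slope": rng.normal(size=60),
    "taskB_mean_RT":  rng.normal(size=60),
    "ANT_alerting":   rng.normal(size=60),
})

# Pairwise Spearman rank correlations across individuals
results = []
for col1, col2 in itertools.combinations(scores.columns, 2):
    rho, p = spearmanr(scores[col1], scores[col2], nan_policy="omit")
    results.append({"score_1": col1, "score_2": col2, "rho": rho, "p": p})
corr_table = pd.DataFrame(results)

# Adjust p-values for multiple comparisons (Benjamini-Hochberg FDR)
corr_table["p_fdr"] = multipletests(corr_table["p"], method="fdr_bh")[1]
print(corr_table.sort_values("p_fdr"))
```

In this sketch, high correlations among scores intended to tap the same construct would support convergent validity, while low correlations between scores of unrelated constructs would support discriminant validity.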