The collected data were pre-processed to remove: all trials on which participants failed to make a response (0.1% of trials); all trials on which responses were made prior to target appearance (0.6% of trials); and all trials on which response times were faster than 200 ms (0.3% of trials). The latter exclusion criterion was motivated both by prior experience with RT data, which suggests that responses faster than 200 ms tend to be anticipatory and unrelated to target processing, and by a generalized additive model predicting trial accuracy from trial RT, which showed that responses rise above chance performance only beyond approximately 200 ms.
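For concreteness, the exclusion steps and the generalized additive model check can be sketched in R roughly as follows. This is a minimal illustration only; the column names (rt_ms, response_made, premature, correct) are assumptions for the sketch, and the actual pre-processing script is available in the OSF repository cited below.

```r
# Sketch of trial exclusions and the GAM check (illustrative column names).
library(dplyr)
library(mgcv)

trials_clean <- trials %>%
  filter(
    response_made,   # drop trials with no response
    !premature,      # drop responses made before target appearance
    rt_ms >= 200     # drop anticipatory responses faster than 200 ms
  )

# GAM of trial accuracy as a smooth function of RT, fit to all responded
# trials (i.e., before the 200 ms cutoff, since it informs that cutoff),
# used to check where accuracy rises above chance.
acc_by_rt <- gam(correct ~ s(rt_ms),
                 family = binomial,
                 data   = filter(trials, response_made))
plot(acc_by_rt, trans = plogis, shift = coef(acc_by_rt)[1])  # probability scale
```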
Bayesian inference was performed using the Stan probabilistic programming language (Stan Development Team, 2017) via the RStan package for R (R Core Team, 2017). Response time and accuracy from both subtests were modeled simultaneously: trial-by-trial accuracy was modeled as a binomial event, and trial-by-trial response time was modeled as having log-normal measurement noise. Within a given participant, the influence of the manipulated variables on accuracy was modeled as affecting the log-odds of error, while their influence on response time was modeled as affecting the log-mean response time; the scale of the log-normal measurement noise was also modeled for each participant. The full set of coefficients relating a given participant to their trial-level data was modeled as varying across participants through a multivariate normal distribution in a hierarchical model, with inference sought on the population-level coefficient means, variabilities, and correlations. Notably, whereas more traditional approaches to data analysis (e.g., ANOVA) would analyze response time and accuracy independently, modeling them in the same model yields more accurate and better-informed inference on the associated coefficients at both the participant and population levels, to the degree that correlations among them are manifest in the population. Such correlations are strongly expected for these measures; for example, slower participants tend to be more accurate, and participants with larger flanker effects on response time tend to have larger flanker effects on response accuracy. In the terminology of De Boeck and Jeon (2019), this is a joint hierarchical model, reflecting an approach to the analysis of timed tests that is now relatively common in the psychometric literature (Van Breukelen, 2005; van der Linden, 2007; Loeys et al., 2011) but has yet to see widespread adoption in cognitive psychology (cf. Molenaar et al., 2015). Independent and weakly informative priors were used for all population-level parameters.
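In rough notation, the joint hierarchical structure described above can be sketched as follows, where $x_{pi}$ denotes the trial-level design vector for participant $p$ on trial $i$; this is a schematic summary only, and the exact parameterization and priors are given in the Stan code available on OSF:

$$
\begin{aligned}
\text{err}_{pi} &\sim \text{Bernoulli}\!\left(\text{logit}^{-1}\!\left(x_{pi}^{\top}\beta_{p}\right)\right),\\
\text{rt}_{pi} &\sim \text{LogNormal}\!\left(x_{pi}^{\top}\gamma_{p},\ \sigma_{p}\right),\\
\left(\beta_{p},\ \gamma_{p},\ \log\sigma_{p}\right) &\sim \text{MVN}\!\left(\mu,\ \Sigma\right),
\end{aligned}
$$

with independent, weakly informative priors on the population means $\mu$ and on the standard deviations and correlations that compose $\Sigma$.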
Data, analysis code, and summary tables of response times by task are available online via the Open Science Framework (OSF) website (Rainham and Lawrence, 2019).