We used classification analyses to measure the extent to which brain activity could predict the task condition and the color and shape of the stimuli on each trial. For every classification, we trained classifiers to discriminate between two categories of trial and tested them on held-out data, repeating the analysis at each time sample to capture how the information carried by the neural response changed over time. We report results obtained with a linear support vector machine classifier, using the MATLAB function fitcsvm with KernelFunction set to linear. We also repeated our analyses using linear discriminant analysis (the MATLAB function classify with the diaglinear discriminant type) and obtained very similar results (not shown).
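As an illustrative sketch only (not the analysis code used here), time-resolved decoding of this kind could be implemented along the following lines; the variable names data (channels × time samples × trials), labels, and testIdx are hypothetical placeholders, and simple classification accuracy is computed in place of the d′ we report:

    % Sketch of time-resolved decoding with a linear SVM.
    % Assumed layout (hypothetical names): data is channels x time samples x trials;
    % labels is a trials x 1 vector of category labels; testIdx is a logical index
    % of held-out trials.
    [~, nTimes, ~] = size(data);
    accuracy = nan(nTimes, 1);
    for t = 1:nTimes
        X = squeeze(data(:, t, :))';   % trials x channels at this time sample
        mdl = fitcsvm(X(~testIdx, :), labels(~testIdx), 'KernelFunction', 'linear');
        accuracy(t) = mean(predict(mdl, X(testIdx, :)) == labels(testIdx));
        % LDA alternative: classify(X(testIdx,:), X(~testIdx,:), labels(~testIdx), 'diaglinear')
    end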
For each classification, we created “pseudotrials” by averaging across trials with the same value on the dimension of interest but differing values along other dimensions. We used pseudotrials to increase the signal-to-noise ratio along the dimension of interest (e.g., see Guggenmos, Sterzer, & Cichy, 2018; Grootswagers, Wardle, & Carlson, 2017). When training classifiers to discriminate object color and shape, we trained and tested within a single task condition (e.g., attend left, report color), comprising two blocks (512 trials). We trained classifiers separately on each pair of the four levels along each feature dimension, at each object location, using pseudotrials to balance across irrelevant dimensions. For example, when classifying “strongly green” versus “weakly green” objects on the left of fixation, there were 128 “strongly green” and 128 “weakly green” trials. For classifying left object color, we defined pseudotrials that were balanced across left object shape and right object color and shape (four levels each). Because balancing across all three of these irrelevant dimensions would require 4 × 4 × 4 = 64 trials per pseudotrial, yielding only two pseudotrials per category, we instead balanced across two of the three irrelevant dimensions, using 4 × 4 = 16 trials per pseudotrial, and randomized across the third (allowing eight pseudotrials per category). For each pair of irrelevant feature dimensions, we generated 100 sets of pseudotrials, each with a different randomization. Repeating this process three times, balancing across different pairs of irrelevant features, gave us 300 sets of pseudotrials in total. For each set of pseudotrials, we trained a classifier using seven of the eight pseudotrials in each category and tested it on the remaining pair of pseudotrials, repeating this eight times and averaging classifier performance across folds.
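A simplified sketch of the pseudotrial averaging and leave-one-pseudotrial-out scheme is given below; the variables trialsA and trialsB (trials × channels data for the two categories at one time sample) and pseudoIdxA and pseudoIdxB (the balanced assignments of trials to pseudotrials) are assumptions introduced for illustration, not the original code:

    % Sketch: average 16 balanced trials into each of 8 pseudotrials per category.
    % trialsA/trialsB: trials x channels data for the two categories (hypothetical);
    % pseudoIdxA/pseudoIdxB: trials x 1 assignments (values 1..8), constructed so that
    % each pseudotrial is balanced across two irrelevant dimensions and randomized
    % across the third.
    nPseudo = 8;
    pseudoA = nan(nPseudo, size(trialsA, 2));
    pseudoB = nan(nPseudo, size(trialsB, 2));
    for p = 1:nPseudo
        pseudoA(p, :) = mean(trialsA(pseudoIdxA == p, :), 1);
        pseudoB(p, :) = mean(trialsB(pseudoIdxB == p, :), 1);
    end

    % Leave-one-pseudotrial-out cross-validation (8 folds), averaging performance.
    foldAcc = nan(nPseudo, 1);
    for p = 1:nPseudo
        keep   = setdiff(1:nPseudo, p);
        trainX = [pseudoA(keep, :); pseudoB(keep, :)];
        trainY = [ones(nPseudo - 1, 1); 2 * ones(nPseudo - 1, 1)];
        mdl    = fitcsvm(trainX, trainY, 'KernelFunction', 'linear');
        foldAcc(p) = mean(predict(mdl, [pseudoA(p, :); pseudoB(p, :)]) == [1; 2]);
    end
    meanAcc = mean(foldAcc);   % then averaged over the 100 randomizations per balancing scheme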
For each feature dimension (color and shape), the four feature values gave six pairwise classifications, which we grouped according to the feature difference between the pair. When considering the effects of spatial and feature-selective attention as a function of feature difference, we grouped classification pairs according to whether they were one (three pairs), two (two pairs), or three (one pair) steps apart along their feature dimension and averaged across classifications within each group.
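For concreteness, the pairing and grouping by step size can be enumerated as in the sketch below, where dPrimePairs is a hypothetical 6 × 1 vector of classifier performance for the six pairs in the order generated:

    % Enumerate the six pairwise classifications among the four feature levels
    % and group them by the number of steps separating each pair.
    pairs = nchoosek(1:4, 2);            % [1 2; 1 3; 1 4; 2 3; 2 4; 3 4]
    steps = pairs(:, 2) - pairs(:, 1);   % one step: 3 pairs; two steps: 2 pairs; three steps: 1 pair
    dPrimeByStep = nan(3, 1);
    for s = 1:3
        dPrimeByStep(s) = mean(dPrimePairs(steps == s));   % average performance within each group
    end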
To summarize the effects of spatial attention (SpatAtt) and feature-selective attention (FeatAtt), we used the following metrics, based on classifier performance (d′) in the attended location, attended feature (aLaF) condition; the attended location, unattended feature (aLuF) condition; the unattended location, attended feature (uLaF) condition; and the unattended location, unattended feature (uLuF) condition.
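The metric definitions themselves are not reproduced in this excerpt. As an assumption only, one formulation consistent with the condition labels above would contrast mean d′ between attended and unattended locations (collapsing over feature relevance) and between attended and unattended features (collapsing over location), for example:

    % Assumed formulation only (the exact definitions are not given in this excerpt):
    % d_aLaF, d_aLuF, d_uLaF, d_uLuF are classifier d' values in the four conditions.
    SpatAtt = mean([d_aLaF, d_aLuF]) - mean([d_uLaF, d_uLuF]);   % spatial attention effect
    FeatAtt = mean([d_aLaF, d_uLaF]) - mean([d_aLuF, d_uLuF]);   % feature-selective attention effect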