fMRI experiment II: Classifying brain responses to emotional film clips
This protocol is extracted from research article:
Emotion schemas are embedded in the human visual system
Sci Adv, Jul 24, 2019; DOI: 10.1126/sciadv.aaw4358

fMRI data used for validating the model have been published previously; here, we briefly summarize the procedure. Full details can be found in the study of Kragel and LaBar (15).

Participants. We used the full sample (n = 32) from an archival dataset characterizing brain responses to emotional film and music clips. For this analysis, which focuses on visual processing, we used only brain responses to film stimuli (available at www.neurovault.org). These data comprise single-trial estimates of brain activity for stimuli used to evoke experiences rated as emotionally neutral or as states of contentment, amusement, surprise, fear, anger, and sadness.
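
For readers who want to retrieve the single-trial images, the sketch below shows one way to pull a NeuroVault collection with nilearn's fetch_neurovault_ids. The collection ID is a placeholder, not the actual ID of this dataset; it would need to be looked up on www.neurovault.org.

```python
# Minimal sketch of fetching a NeuroVault collection with nilearn.
# COLLECTION_ID is a placeholder, not the real ID for this dataset.
from nilearn.datasets import fetch_neurovault_ids

COLLECTION_ID = 1234  # hypothetical: replace with the study's collection ID
data = fetch_neurovault_ids(collection_ids=[COLLECTION_ID])

images = data.images          # paths to the downloaded NIfTI files
metadata = data.images_meta   # per-image metadata (e.g., emotion labels)
```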

Experimental paradigm. Participants completed an emotion induction task in which they were presented with an emotional stimulus and subsequently provided on-line self-reports of emotional experience. Each trial started with the presentation of either a film or music clip (mean duration, 2.2 min), immediately followed by a 23-item affect self-report scale lasting 1.9 min and then a 1.5-min washout clip to minimize carryover effects.
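
As a point of reference, this trial structure can be written in the BIDS-style onset/duration format that most fMRI modeling tools consume. The sketch below uses the mean durations reported above (actual clip durations varied per trial), and the condition label is hypothetical.

```python
# One trial's timing in BIDS-style events format (onsets/durations in
# seconds). Durations are the reported means: film 2.2 min, ratings
# 1.9 min, washout 1.5 min. The "film_fear" label is hypothetical.
import pandas as pd

trial_events = pd.DataFrame({
    "onset":      [0.0, 132.0, 246.0],
    "duration":   [132.0, 114.0, 90.0],
    "trial_type": ["film_fear", "rating", "washout"],
})
print(trial_events)
```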

MRI data acquisition. Scanning was performed on a 3-T General Electric MR 750 system with gradients of 50 mT/m and an eight-channel head coil for parallel imaging (General Electric, Waukesha, WI, USA). High-resolution images were acquired using a 3D fast SPGR BRAVO pulse sequence: TR, 7.58 ms; TE, 2.936 ms; image matrix, 256 × 256; α = 12°; voxel size, 1 mm × 1 mm × 1 mm; 206 contiguous slices. These structural images were aligned in the near-axial plane defined by the anterior and posterior commissures. Whole-brain functional images were acquired using a spiral-in pulse sequence with sensitivity encoding along the axial plane (TR, 2000 ms; TE, 30 ms; image matrix, 64 × 128; α = 70°; voxel size, 3.8 mm × 3.8 mm × 3.8 mm; 34 contiguous slices).
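
For illustration, the functional acquisition parameters above map onto a BIDS-style JSON sidecar as follows. Field names follow the BIDS specification, values are converted to seconds where required, and fields not reported in the section are omitted.

```python
# The functional acquisition parameters expressed as a BIDS-style JSON
# sidecar. Values come from the section above (TR 2000 ms -> 2.0 s,
# TE 30 ms -> 0.030 s, flip angle 70 degrees).
import json

func_sidecar = {
    "Manufacturer": "GE",
    "ManufacturersModelName": "MR750",
    "MagneticFieldStrength": 3,
    "RepetitionTime": 2.0,       # seconds
    "EchoTime": 0.030,           # seconds
    "FlipAngle": 70,             # degrees
    "PulseSequenceType": "spiral-in with sensitivity encoding",
}
print(json.dumps(func_sidecar, indent=2))
```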

MRI preprocessing. fMRI data were preprocessed using SPM8 (www.fil.ion.ucl.ac.uk/spm). Images were first realigned to the first image of the series using a six-parameter, rigid-body transformation. The realigned images were then coregistered to each participant’s T1-weighted structural image and normalized to MNI152 space using high-dimensional warping implemented in the VBM8 toolbox. No additional smoothing was applied to the normalized images.
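
A minimal sketch of the realignment and coregistration steps is given below, assuming nipype's SPM interfaces with SPM8 and MATLAB installed; all filenames are placeholders. The VBM8 high-dimensional normalization step has no standard nipype wrapper, so it is only noted in a comment.

```python
# Realignment and coregistration via nipype's SPM interfaces.
# Filenames are hypothetical; assumes SPM8 and MATLAB are available.
from nipype.interfaces import spm

# Realign all functional volumes to the first image of the series
# (six-parameter rigid-body transformation).
realign = spm.Realign(
    in_files="sub-01_task-emotion_bold.nii",  # hypothetical filename
    register_to_mean=False,                   # align to the first image, not the mean
)
realign_results = realign.run()

# Coregister the mean functional image to the participant's T1-weighted
# structural image, applying the transformation to all realigned volumes.
coreg = spm.Coregister(
    target="sub-01_T1w.nii",                  # hypothetical filename
    source=realign_results.outputs.mean_image,
    apply_to_files=realign_results.outputs.realigned_files,
)
coreg.run()

# Normalization to MNI152 space via VBM8's high-dimensional warping has
# no standard nipype wrapper; it would be run through the SPM8/VBM8 batch
# system. No additional smoothing is applied, matching the protocol.
```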

MRI analysis. A univariate GLM was used to create images for the prediction analysis. The model included separate boxcar regressors indicating the onset times for each stimulus, which allowed us to isolate responses to each emotion category. Separate regressors for the rating periods were included in the model but were not of interest. All regressors were convolved with the canonical HRF used in SPM, and six additional covariate regressors modeled movement effects.
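
The same design matrix logic can be sketched outside SPM, for example with nilearn's make_first_level_design_matrix. The event timings, scan count, and motion parameters below are placeholders, but the structure mirrors the model described above: boxcar stimulus regressors, a rating-period regressor, the canonical SPM HRF, and six motion covariates.

```python
# Hedged sketch of the GLM design matrix using nilearn; all timings and
# motion parameters are placeholders, not the study's actual values.
import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix

TR = 2.0                                 # seconds, from the acquisition section
n_scans = 300                            # placeholder scan count
frame_times = np.arange(n_scans) * TR

events = pd.DataFrame({                  # hypothetical timings and labels
    "onset":      [0.0, 132.0],
    "duration":   [132.0, 114.0],
    "trial_type": ["film_amusement", "rating"],
})
motion = np.zeros((n_scans, 6))          # placeholder realignment parameters

design = make_first_level_design_matrix(
    frame_times,
    events=events,
    hrf_model="spm",                     # canonical SPM HRF
    drift_model=None,
    add_regs=motion,
    add_reg_names=[f"motion_{i}" for i in range(6)],
)
print(design.columns.tolist())
```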

Pattern classification of occipital lobe responses to the film clips was performed using PLS discriminant analysis [following methods in (15)]. The data comprised 444 trials in total (2 videos × 7 emotion categories × 32 participants, with four trials excluded because of technical issues during scanning). Measures of classification performance were estimated using eight-fold participant-independent cross-validation: participants were randomly divided into eight groups; classification models were iteratively trained on data from all but one group, and model performance was assessed on data from the holdout group. This procedure was repeated until all data had been used for training and testing (eight folds in total). Inference on model performance was made using permutation tests, in which the above cross-validation procedure was repeated 1000 times with randomly permuted class labels to produce a null distribution. The number of emotion categories that could be accurately discriminated from one another was estimated using discriminable cluster identification (see Supplementary Text for details). Inference on model weights (i.e., PLS parameter estimates) at each voxel was made via bootstrap resampling with a normal approximated interval.
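
Below is a minimal sketch of this classification scheme using scikit-learn: PLS regression on one-hot emotion labels (a standard PLS-DA construction, not necessarily the exact implementation of (15)), evaluated with eight-fold participant-independent cross-validation. X, y, groups, and the PLS component count are placeholders; note that GroupKFold assigns participants to folds deterministically rather than randomly, but preserves the participant-independence that matters here.

```python
# PLS-DA with eight-fold participant-independent cross-validation.
# X, y, and groups are random placeholders standing in for the
# occipital-lobe patterns, emotion labels, and participant IDs.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import GroupKFold
from sklearn.preprocessing import LabelBinarizer

rng = np.random.default_rng(0)
n_trials, n_voxels = 444, 5000                 # trial count from the section; voxel count is a placeholder
X = rng.standard_normal((n_trials, n_voxels))  # single-trial occipital patterns
y = rng.integers(0, 7, size=n_trials)          # 7 emotion categories
groups = rng.integers(0, 32, size=n_trials)    # participant ID per trial

lb = LabelBinarizer()
fold_accuracy = []
for train, test in GroupKFold(n_splits=8).split(X, y, groups):
    pls = PLSRegression(n_components=20)       # component count is an assumption
    pls.fit(X[train], lb.fit_transform(y[train]))
    # Predicted category = column with the largest predicted indicator value.
    pred = lb.classes_[np.argmax(pls.predict(X[test]), axis=1)]
    fold_accuracy.append(np.mean(pred == y[test]))

print(f"mean cross-validated accuracy: {np.mean(fold_accuracy):.3f}")
```

A permutation null distribution would be obtained by repeating this cross-validation loop 1000 times with the labels in y shuffled before training, then comparing the observed accuracy against that distribution.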
