All scanning was conducted at the University of Pittsburgh Medical Center on a research-dedicated 3 T Siemens Trio TIM scanner (Munich, Germany) using a 12-channel head coil. The baseline and end-scan protocols included both structural and functional images, while the other scans collected only functional sequences. In this manuscript, we limited our analysis to the functional sequences: a resting-state sequence, an explicit emotion regulation task sequence, and an emotional reactivity (faces/shapes) sequence.
An axial, whole-brain 3D magnetization-prepared rapid gradient echo (MPRAGE) image was collected with repetition time (TR) = 2300 ms, echo time (TE) = 3.43 ms, flip angle (FA) = 9°, inversion time (TI) = 900 ms, field of view (FOV) = 256 × 224, 176 slices, 1 mm isotropic resolution, and a Generalized Autocalibrating Partially Parallel Acquisition (GRAPPA) factor of 2. An axial, whole-brain 2D fluid-attenuated inversion recovery (FLAIR) image was collected with TR = 9160 ms, TE = 90 ms, FA = 150°, TI = 2500 ms, FOV = 256 × 212, 48 slices, and 1 × 1 × 3 mm resolution.
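For convenience, the structural acquisition parameters above can be collected in a machine-readable form. The following Python dictionary is a minimal sketch for record keeping; the field names are illustrative and are not taken from the original acquisition files.

```python
# Illustrative summary of the structural sequence parameters reported above.
# Field names are hypothetical (loosely BIDS-inspired); values are from the text.
STRUCTURAL_SEQUENCES = {
    "MPRAGE": {
        "orientation": "axial",
        "dimensionality": "3D",
        "TR_ms": 2300,
        "TE_ms": 3.43,
        "flip_angle_deg": 9,
        "TI_ms": 900,
        "FOV": (256, 224),
        "n_slices": 176,
        "voxel_size_mm": (1, 1, 1),
        "GRAPPA_factor": 2,
    },
    "FLAIR": {
        "orientation": "axial",
        "dimensionality": "2D",
        "TR_ms": 9160,
        "TE_ms": 90,
        "flip_angle_deg": 150,
        "TI_ms": 2500,
        "FOV": (256, 212),
        "n_slices": 48,
        "voxel_size_mm": (1, 1, 3),
    },
}
```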
An axial, whole-brain (excluding cerebellum) echo-planar imaging (EPI) T2*-weighted functional sequence was collected to measure the blood oxygen level dependent (BOLD) response with TR = 2000 ms, TE = 34 ms, FA = 90°, FOV = 128 × 128, 28 slices, and 2 × 2 × 4 mm resolution. The faces/shapes task (see Functional Imaging Metrics) lasted 117 volumes (~4 min), the explicit emotion regulation task (see Functional Imaging Metrics) lasted 270 volumes (~9 min), and the resting-state scan lasted 150 volumes (~5 min). Due to variability in slice placement by MR technicians, coverage of the functional scans was generally limited to above the cerebellum and below the top of the motor cortex (though this varied slightly between functional sequences). Participants were instructed to lie awake and view a crosshair during the resting-state scan.
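As a check on the reported run lengths, duration follows directly from the volume count and the 2000 ms TR. A minimal sketch of this arithmetic, with volume counts taken from the text:

```python
# Scan duration = number of volumes x TR; TR = 2000 ms = 2 s (from the text).
TR_S = 2.0

FUNCTIONAL_RUNS = {
    "faces_shapes": 117,        # volumes (~4 min reported)
    "emotion_regulation": 270,  # volumes (~9 min reported)
    "resting_state": 150,       # volumes (~5 min reported)
}

for run, n_volumes in FUNCTIONAL_RUNS.items():
    duration_s = n_volumes * TR_S
    print(f"{run}: {n_volumes} volumes -> {duration_s:.0f} s (~{duration_s / 60:.1f} min)")
```

Running this reproduces the approximate durations quoted above (234 s, 540 s, and 300 s).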
The faces/shapes task is widely used and has been found to robustly activate the amygdala (Hariri et al., 2002, Hariri et al., 2003). Participants were instructed to match either a face cue or a shape cue. The cue was shown at the top center of the screen, and participants responded with an MR-compatible glove (left or right index finger) by matching the cue to one of two simultaneously presented faces. The facial expressions shown were either angry or fearful. During shape blocks, participants matched a shape cue to one of two simultaneously presented shapes. The shapes task (five blocks) was interleaved with the faces task (four blocks); each block lasted 24 s and contained six trials (4 s each). Before the beginning of each block, participants were instructed visually to “match emotion” or “match form” (2 s). The face images were drawn from a set of 12 different images (six per block, three of each sex), all derived from a standard set of pictures of facial affect. Stimulus presentation and responses were controlled using E-Prime software (Psychology Software Tools, Inc., Pittsburgh, PA).
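The block structure above fully determines the task timeline: nine alternating blocks, each preceded by a 2 s instruction and containing six 4 s trials. The following sketch reconstructs block onsets under those parameters; it is illustrative only (a shapes-first block order is an assumption, and this is not the original E-Prime script).

```python
# Reconstructed faces/shapes timeline from the parameters in the text:
# 5 shapes blocks interleaved with 4 faces blocks, each block = 2 s instruction
# + 6 trials x 4 s = 26 s. Starting with a shapes block is an assumption.
INSTRUCTION_S = 2
TRIAL_S = 4
TRIALS_PER_BLOCK = 6

blocks = ["shapes", "faces"] * 4 + ["shapes"]  # 5 shapes, 4 faces, interleaved

t = 0
for condition in blocks:
    print(f"{t:4d} s  instruction: match {'form' if condition == 'shapes' else 'emotion'}")
    t += INSTRUCTION_S
    print(f"{t:4d} s  {condition} block ({TRIALS_PER_BLOCK} trials x {TRIAL_S} s)")
    t += TRIALS_PER_BLOCK * TRIAL_S

print(f"total: {t} s = {t // 2} volumes at TR = 2 s")  # 234 s = 117 volumes
```

The total of 234 s is consistent with the 117-volume run length reported above.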
Participants were shown emotionally neutral or negative images from the standardized International Affective Picture System (IAPS) (Lang, 2005) and were instructed to either “Look” or “Decrease.” This task has been described previously (Khalaf et al., 2016) and has been used to activate the prefrontal cortex (especially the dorsolateral prefrontal cortex) as a means of explicitly regulating limbic reactivity. During the look instruction, participants were to view the content naturally. During the decrease instruction, participants were instructed to reappraise negative images to actively alter the elicited emotion. A master's-level staff member instructed participants on how to reappraise prior to entering the scanner. After each image, participants rated how negative they felt on a scale from 1 to 5. The neutral (11 events), negative (15 events), and negative regulate (15 events) conditions were interleaved, and each event lasted 6 s. The inter-trial interval was 13 s with no jitter (though trial onsets were not locked to the TR). This spacing allowed each individual response to be modeled by providing sufficient time between stimuli, but likely reduced power to detect individual effects. Images were drawn from the IAPS set, and stimulus presentation and responses were controlled using E-Prime software (Psychology Software Tools, Inc., Pittsburgh, PA).
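The event-related timing described above (41 events of 6 s each, fixed 13 s spacing, onsets not locked to the TR) can be sketched as an onset list of the kind used to build a design matrix. Two assumptions are made for illustration: the 13 s interval is read as onset-to-onset spacing (41 × 13 s = 533 s fits the 540 s run, whereas 41 × 19 s would not), and the condition ordering is randomized here because the actual interleaving scheme is not specified.

```python
# Sketch of the emotion regulation event schedule described above. The 13 s
# interval is treated as onset-to-onset spacing (an inference, see lead-in);
# the condition order below is illustrative, not the order actually used.
import random

random.seed(0)
conditions = ["neutral"] * 11 + ["negative_look"] * 15 + ["negative_decrease"] * 15
random.shuffle(conditions)  # actual interleaving scheme was not specified

EVENT_S = 6
SPACING_S = 13  # fixed, no jitter; onsets need not align with the 2 s TR

onsets = [i * SPACING_S for i in range(len(conditions))]
for onset, cond in zip(onsets, conditions):
    print(f"onset {onset:3d} s  duration {EVENT_S} s  {cond}")
print(f"last event ends at {onsets[-1] + EVENT_S} s of the 540 s run")
```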