At the baseline laboratory session, participants completed a standard emotion differentiation task (Erbas, Ceulemans, Lee Pe, Koval, & Kuppens, 2014; Nook, Sasse, et al., 2018). Participants viewed 20 negative and 20 positive images from the International Affective Picture System (IAPS; Lang, Bradley, & Cuthbert, 2008) and rated how strongly each image induced a set of emotions on a 10-point scale (1 = not at all, 10 = extremely). For negative images, they rated how strongly they felt five negative emotions (i.e., angry, ashamed, disgusted, sad, and scared), and for positive images, they rated how strongly they felt five positive emotions (i.e., calm, excited, happy, inspired, and interested). Images were selected to induce a range of negative or positive emotions. Each image was presented for 5 seconds, and ratings were self-paced. Images and emotions were presented in random order for each participant.
Following prior work, we computed negative emotion differentiation scores by calculating the intraclass correlation (ICC) between negative emotion ratings across the 20 images (Erbas et al., 2014; Kalokerinos et al., 2019; Nook, Sasse, et al., 2018; Pond et al., 2012; Tugade et al., 2004). Specifically, we followed the methods shared by Kalokerinos et al. (2019) and computed emotion differentiation scores by Fisher r-to-z transforming the ICC of consistency in average ratings across emotions (i.e., ICC(3,k)). Higher ICCs indicate greater similarity in how participants used each emotion scale (i.e., lower differentiation across emotions), whereas lower ICCs indicate greater differentiation. For interpretability, Fisher-transformed ICCs were reverse scored by subtracting them from 1 so that higher scores represented greater emotion differentiation. The same procedure was applied to participants’ ratings of positive emotions in response to positive images to compute positive emotion differentiation scores.
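For concreteness, the sketch below illustrates this scoring step under the assumption of a complete 20 (image) × 5 (emotion) rating matrix: it computes the consistency ICC(3,k) from its two-way ANOVA decomposition, applies the Fisher r-to-z transform, and reverse scores the result. The function name `differentiation_score` and the data layout are illustrative and are not taken from the original analysis code.

```python
import numpy as np

def differentiation_score(ratings):
    """Emotion differentiation score from an images x emotions rating matrix.

    `ratings` is assumed to be a 2-D array with one row per image (20 here)
    and one column per emotion (5 here). Returns 1 - Fisher-z(ICC(3,k)),
    so that higher values indicate greater differentiation.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape                      # n images, k emotions

    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)          # per-image means
    col_means = ratings.mean(axis=0)          # per-emotion means

    # Two-way ANOVA decomposition underlying the consistency ICC(3,k)
    ss_total = ((ratings - grand_mean) ** 2).sum()
    ss_rows = k * ((row_means - grand_mean) ** 2).sum()    # between images
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()    # between emotions
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    icc_3k = (ms_rows - ms_error) / ms_rows   # consistency of average ratings
    z = np.arctanh(icc_3k)                    # Fisher r-to-z transform
    return 1.0 - z                            # reverse score: higher = more differentiated
```

Under this scheme, the function would be applied once per participant and valence, yielding one negative and one positive differentiation score per person.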
When prompted by the MetricWire app, participants responded to questions assessing perceived stress and affect in the current moment. In each prompt, stress was defined for participants through the text: “stress is a situation where a person feels upset because of something that happened unexpectedly or when they are unable to control important things in their life.” Participants then responded to the question “do you feel this kind of stress right now?” on a 7-point scale (1 = not at all, 7 = very stressed). We refer to these ratings as perceived stress scores.
At each MetricWire prompt, participants also rated their current feelings of depression and anxiety by responding to the questions “how depressed do you feel right now?” and “how anxious do you feel right now?” on 7-point scales (1 = not at all, 7 = very depressed/anxious). To distinguish these ratings from symptom inventories collected at the month-level, we refer to these ratings as depressed affect and anxious affect ratings, respectively.
Exposure to stressful life events was assessed at each monthly visit using the UCLA Life Stress Interview (Hammen et al., 2000) adapted for children and adolescents. This semi-structured interview was administered by a trained experimenter and assesses the impact of life events as objectively as possible in terms of both chronic stressors (e.g., interpersonal conflict with peers or family) and acute stressful life events (e.g., failing a test, break-up of a romantic relationship). The interview has been extensively validated and is widely considered to be the gold standard approach for assessing stressful life events and chronic stress.
Structured prompts query several domains of the participant’s life (i.e., peers, parents, household/extended family, neighborhood, school, academics, health, finance, and discrimination) for stressful life events. Each stressful event is then probed to determine timing, duration, impact, and coping resources. Trained experimenters objectively coded the impact of each event on an individual of the participant’s age and sex on a 10-point scale (1.0 = no negative impact, 5.0 = extremely severe negative impact, half-points included). Following prior work, we produced a stress impact score by summing the impact scores of all reported events (excluding those coded as 1), as sketched below. This score jointly reflects the number and severity of stressors that occurred (Hammen et al., 2000). The interview was administered at baseline and at each of the monthly visits to measure stressful experiences occurring in the prior month. Stress measures were not obtained for four observations (i.e., 1.11% of month-level observations).
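A minimal illustration of this aggregation rule (the helper name and list layout are ours, not part of the interview coding materials):

```python
def stress_impact_score(impact_codes):
    """Monthly stress impact score from interviewer impact codes.

    `impact_codes` is assumed to be a list of objective impact ratings on
    the 1.0-5.0 scale (half-points allowed). Events coded exactly 1.0
    (no negative impact) are excluded before summing.
    """
    return sum(code for code in impact_codes if code > 1.0)

# Example: two impactful events and one coded as no impact
# stress_impact_score([1.0, 2.5, 3.0]) -> 5.5
```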
Symptoms of depression were assessed using the Patient Health Questionnaire-9 (PHQ-9), a well-validated and widely used measure of depression (Kroenke, Spitzer, & Williams, 2001). At each session, participants reported on their depressive symptoms over the last two weeks. Item scores ranged from 0 to 2, with higher scores indicating greater depressive symptom severity. Responses demonstrated strong internal consistency across all time points in the current study (α = 0.84). Symptoms of generalized anxiety disorder were measured with the Generalized Anxiety Disorder-7 (GAD-7; Spitzer, Kroenke, Williams, & Löwe, 2006). As with the PHQ-9, participants reported on anxiety symptoms occurring in the last two weeks. Item scores ranged from 0 to 3, with higher scores indicating greater symptom severity. Reliability of the GAD-7 was high in the current study (α = 0.87). We refer to the sums of these monthly symptom inventories as measures of depression symptoms and anxiety symptoms, respectively.
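The internal-consistency estimates reported above follow the standard Cronbach's alpha formula; a minimal sketch, assuming an observations × items matrix of complete item scores, is:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an observations x items matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                               # number of items
    item_vars = items.var(axis=0, ddof=1)            # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```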
Beyond the four missing stress measures, anxiety and depression symptom scores were not obtained for one additional observation (i.e., 1.39% of month-level observations in total). Within the remaining observations, participants skipped two GAD-7 items (i.e., < 1% of 2,485 total items) and 14 PHQ-9 items (i.e., < 1% of 3,195 total items). When calculating depression and anxiety scores, these missing items were imputed by assigning the mean of the remaining items on that scale at that assessment.
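We read this imputation rule as person-level mean imputation within each scale at each assessment; a minimal sketch of that reading (with missing items represented as NaN and an illustrative function name) is:

```python
import numpy as np

def scale_score_with_imputation(item_responses):
    """Sum score for one scale at one assessment, imputing skipped items.

    `item_responses` is assumed to be a 1-D array of one participant's item
    scores at a single assessment, with skipped items as NaN. Each missing
    item is replaced with the mean of the items completed at that
    assessment, and the items are then summed.
    """
    items = np.asarray(item_responses, dtype=float)
    observed_mean = np.nanmean(items)                 # mean of completed items
    imputed = np.where(np.isnan(items), observed_mean, items)
    return imputed.sum()

# Example: a GAD-7 response with one skipped item
# scale_score_with_imputation([2, 1, np.nan, 0, 3, 1, 2])
```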