Before starting the experiment, parents were invited to place their infant in a car seat mounted on a table in front of a computer screen (1280 × 1024 pixels) and to seat themselves on a chair located on the infant's right side, so as to create a triadic setting (see Figure 1). Because the main interest of this study was the natural unfolding of the social interaction, and thus participants' automatic and spontaneous behavioral responses, parents were not explicitly told to watch the video displayed on the screen. Instead, they were simply instructed to behave as they normally would and not to intervene unless the child sought their attention. In the event of distress or fussiness, parents were encouraged to soothe and comfort their infant. Throughout the experimental paradigm, a twin-camera camcorder (Panasonic HC-W570) recorded the triadic setting on video: the main camera captured the parent's and the infant's face and upper body in the main window, while the second camera captured the emotional stimuli presented on the screen in a small inset window.
Figure 1. Triadic setting.
Emotional stimuli were drawn from the Amsterdam Dynamic Facial Expression Set (ADFES), which consists of validated and standardized dynamic videos featuring facial expressions of emotions [60]. The facial expressions are based on prototypes of the "basic emotions" as described in the Facial Action Coding System (FACS) Investigator's Guide [61]. To prevent gender biases, both female and male adult models were included as stimuli. Participants were presented with eight blocks, two for each of the four models. Each block consisted of five trials: neutral, happy, sad, angry, and fearful facial expressions. Blocks always started with the neutral trial, intended as familiarization with the model's face, and proceeded with the other trials presented in randomized order. Each trial started with 500 ms of the attention-getter video (repeated if the child was not attentive), followed by 1000 ms of blank screen and 1500 ms of a blurred face. The model's face then became clearly visible and remained static and neutral for 500 ms, followed by the dynamic unfolding of the emotional expression, which reached the apex of the expressive behavior within 500 ms and remained at the apex for 5000 ms. Figure 2 illustrates the time flow of the trials and examples of the emotional stimuli.
Figure 2. Time flow of the trials and examples of the emotional stimuli.
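For readers who wish to script a comparable presentation schedule, the following Python sketch encodes the block and trial structure described above. It is illustrative only: the phase names, model labels, and shuffling routine are assumptions for demonstration, not the presentation software actually used in the study.

```python
import random

# Trial phases and their durations in milliseconds, as described above
# (phase names are hypothetical labels, not the authors' terminology).
TRIAL_PHASES_MS = [
    ("attention_getter", 500),   # repeated if the infant is not attentive
    ("blank_screen",     1000),
    ("blurred_face",     1500),
    ("neutral_static",   500),
    ("dynamic_unfold",   500),   # expression unfolds to its apex
    ("apex_hold",        5000),  # expression held at apex
]

EMOTIONS = ["happy", "sad", "angry", "fearful"]

def build_block(model_id, rng):
    """One block: a neutral familiarization trial followed by the four
    emotional trials in randomized order, for a single model."""
    emotional_trials = EMOTIONS[:]
    rng.shuffle(emotional_trials)
    return [(model_id, "neutral")] + [(model_id, e) for e in emotional_trials]

if __name__ == "__main__":
    rng = random.Random(0)
    # Eight blocks in total: two per model, four models (labelled 1-4 here).
    blocks = [build_block(m, rng) for m in [1, 2, 3, 4] * 2]
    print(blocks[0])
```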
Observational data were analyzed at the micro-level (coding in units of 1 s or less, with an accuracy of up to 1/25th of a second) using the computer software The Observer XT 12.5 [62]. The first four blocks, corresponding to one block for each of the four models, were systematically coded. Coding was restricted to the time intervals during which a facial expression was on the screen (i.e., 6000 ms per trial). The coding of the infant and of the parent was performed independently by different coders in separate coding sessions. The infant coder was also trained to code the emotional stimulus. A total of six coders, three for each infant age group (i.e., six and twelve months), participated in the project.
The coding system for emotional communication of Colonnesi et al. [11] was adopted. Accordingly, facial expressions and gaze were coded as state events (i.e., duration in seconds) in specific, mutually exclusive categories. Facial expressions were coded as positive when involving smiles and raised lip corners, and as negative when involving frowns or lowered lip corners. Neutral facial expressions were coded when neither a positive nor a negative facial expression was displayed, either because no muscle movement was visible or because the visible muscle movements were not indicative of an emotion. Gaze direction was coded as directed to the stimulus, the screen, the interaction partner, or elsewhere when it matched none of the former. An additional coding category was included to filter the observational data on the basis of the emotional stimuli displayed on the screen, so as to classify them as neutral, happy, sad, angry, or fearful.
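As an illustration of how the mutually exclusive categories and state events could be represented for analysis, the following Python sketch defines simple data structures. The category labels and the StateEvent class are paraphrased assumptions for demonstration; they are not the Observer XT coding scheme files used by the coders.

```python
from dataclasses import dataclass
from enum import Enum

class FacialExpression(Enum):
    POSITIVE = "positive"   # smiles, raised lip corners
    NEGATIVE = "negative"   # frowns, lowered lip corners
    NEUTRAL = "neutral"     # no emotion-indicative muscle movement

class GazeDirection(Enum):
    STIMULUS = "stimulus"
    SCREEN = "screen"
    PARTNER = "interaction_partner"
    ELSEWHERE = "elsewhere"

class StimulusEmotion(Enum):
    NEUTRAL = "neutral"
    HAPPY = "happy"
    SAD = "sad"
    ANGRY = "angry"
    FEARFUL = "fearful"

@dataclass
class StateEvent:
    """A coded state event: one category held over a time interval (seconds)."""
    category: Enum
    start_s: float
    end_s: float

    @property
    def duration_s(self) -> float:
        return self.end_s - self.start_s
```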
A total of 14 videos for each infant age group, corresponding to 26% and 21% of the recordings of the two groups, respectively, were randomly selected and double coded to assess inter-rater reliability. The mean Cohen's kappa values obtained from the reliability coding of participants' behaviors in the six- and twelve-month age groups were as follows: infant facial expressions 0.90 and 0.93; parent facial expressions 0.83 and 0.93; infant gaze 0.95 and 0.93; and parent gaze 0.97 and 0.96. These values are comparable to those of other studies that used similar microanalytic methods, e.g., [11], and can be considered exceptionally good, as reliability was based on agreement not only in the scored behavior but also in the timing of coding.
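The article does not specify how time-based agreement was computed; one common approach is to resample both coders' state events into small fixed time bins and compute Cohen's kappa over the bins. The sketch below is a minimal, hypothetical illustration of that approach, assuming bins of 1/25 s (the coding accuracy mentioned above) and using scikit-learn's cohen_kappa_score; it is not the authors' reported procedure.

```python
from sklearn.metrics import cohen_kappa_score

BIN_S = 1 / 25  # one video frame at 25 fps

def events_to_bins(events, total_s, fill="neutral"):
    """Turn (label, start_s, end_s) state events into one label per time bin."""
    n_bins = int(round(total_s / BIN_S))
    bins = [fill] * n_bins
    for label, start_s, end_s in events:
        first = int(round(start_s / BIN_S))
        last = min(int(round(end_s / BIN_S)), n_bins)
        for i in range(first, last):
            bins[i] = label
    return bins

# Toy example: two coders scoring the same 6 s trial, with a small disagreement
# about when the positive expression ends.
coder_a = [("positive", 0.0, 2.0), ("neutral", 2.0, 6.0)]
coder_b = [("positive", 0.0, 1.8), ("neutral", 1.8, 6.0)]

kappa = cohen_kappa_score(events_to_bins(coder_a, 6.0), events_to_bins(coder_b, 6.0))
print(f"Cohen's kappa: {kappa:.2f}")
```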
Emotional mimicry and mutual attention scores were computed separately for each emotion. Emotional mimicry was quantified as the time (duration in seconds) spent displaying a facial expression congruent in valence with the emotional stimulus. In other words, emotional mimicry was identified when participants displayed positive facial expressions (i.e., smiling) in response to the happy stimuli, and negative facial expressions (i.e., frowning/scowling) in response to the sad, angry, or fearful stimuli. Parent-infant mutual attention was quantified as the time (duration in seconds) during which the infant and the parent were simultaneously looking at one another. Figure 3 illustrates an example of data visualization.
Figure 3. Data visualization examples. (a) Emotional mimicry, operationalized as the temporal co-occurrence of the infant's negative facial expression with the sad emotional stimulus. (b) Parent-infant mutual visual attention, operationalized as the temporal co-occurrence of the parent's and the infant's gaze toward each other.
Finally, emotional mimicry and mutual attention were converted into percentage scores, based on the total duration of the presentations of the corresponding emotional stimulus (e.g., total time spent displaying positive facial expressions during the happy stimuli presentations / total presentation time of the happy stimuli × 100).
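As an illustration of these computations, the Python sketch below derives an emotional mimicry percentage from interval data: it sums the temporal overlap between valence-congruent expression intervals and stimulus presentation intervals, then expresses it as a percentage of the total stimulus presentation time. The interval representation and toy numbers are assumptions for demonstration and do not reflect the study's actual Observer XT export format; mutual attention percentages could be obtained analogously by intersecting the parent's and the infant's gaze-to-partner intervals.

```python
def overlap_s(a, b):
    """Overlap in seconds between two (start_s, end_s) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def mimicry_percentage(expression_intervals, stimulus_intervals):
    """Time a valence-congruent expression co-occurs with the stimulus,
    as a percentage of the total stimulus presentation time."""
    total_stimulus = sum(end - start for start, end in stimulus_intervals)
    co_occurrence = sum(
        overlap_s(e, s) for e in expression_intervals for s in stimulus_intervals
    )
    return 100.0 * co_occurrence / total_stimulus

# Toy example: happy stimuli shown twice for 6 s each; the infant smiles
# during 0-3 s and 14-16 s of the session.
happy_stimuli = [(0.0, 6.0), (10.0, 16.0)]
infant_smiles = [(0.0, 3.0), (14.0, 16.0)]
print(mimicry_percentage(infant_smiles, happy_stimuli))  # -> ~41.7
```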