Data reduction

Heather J. Nuske, Giacomo Vivanti, Cheryl Dissanayake

Pupil data, preprocessed to minimise large movement artefacts (see ‘Apparatus’ section), were further processed with a custom-built LabVIEW 2010 (National Instruments, Austin, Texas, USA) algorithm (Beaton, unpublished), based on previously published methodology (e.g. [62, 63]), to further screen out movement-related artefacts (including partial head turns and blinks). First, samples for which only one eye was tracked were eliminated (to minimise pupil size miscalculation due to head angle or ambient light exposure); where both eyes were tracked, a mean pupil diameter across eyes was computed. Second, to remove extreme sample-to-sample changes in pupil diameter due to partial eyelid closures (common in samples on either side of missing data due to blinks), samples more than 2 standard deviations from the mean rate of change (calculated for each participant) were removed. After partial head turn- and blink-related artefacts were deleted, missing pupil data rates were calculated by group (pre-interpolation, whole video): happy condition: ASD group range = 1–70 %, M = 34 %, SD = 21 %; TD group range = 2–72 %, M = 24 %, SD = 20 %; fear condition: ASD group range = 2–77 %, M = 41 %, SD = 25 %; TD group range = 2–48 %, M = 22 %, SD = 15 %. Third, gaps in the data due to blinks were linearly interpolated only between stable data points (traces), up to a maximum gap of 350 ms [64, 65]. A trace was deemed stable if a minimum of 50 % of the samples were present within 2 × the total length of the gap, both pre- and post-gap. This method allowed for a differential threshold for linear interpolation based on gap length and the reliability of the pre/post-gap data.
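The following is a minimal sketch (in Python, not the authors’ LabVIEW code) of how these three screening steps could be implemented, assuming per-eye pupil diameters sampled at a fixed rate with NaN marking untracked samples; the sample rate, the use of the absolute sample-to-sample change in step 2, and the function names are assumptions for illustration.

```python
import numpy as np

SAMPLE_RATE_HZ = 60          # assumed tracker rate; adjust to the actual hardware
MAX_GAP_MS = 350             # maximum blink gap to interpolate linearly

def clean_pupil_trace(left, right):
    """left/right: 1-D arrays of pupil diameters (mm), NaN where the eye was not tracked."""
    # Step 1: keep only samples where both eyes were tracked, then average across eyes.
    both = ~np.isnan(left) & ~np.isnan(right)
    pupil = np.where(both, (left + right) / 2.0, np.nan)

    # Step 2: remove extreme sample-to-sample changes (partial eyelid closures),
    # here read as |change| more than 2 SD above the participant's mean rate of change.
    rate = np.abs(np.diff(pupil, prepend=pupil[0]))
    cutoff = np.nanmean(rate) + 2 * np.nanstd(rate)
    pupil[rate > cutoff] = np.nan

    # Step 3: linearly interpolate blink gaps up to MAX_GAP_MS, but only between
    # stable traces: at least 50 % valid samples within 2 x the gap length on each side.
    max_gap = int(MAX_GAP_MS / 1000 * SAMPLE_RATE_HZ)
    isnan = np.isnan(pupil)
    n = len(pupil)
    i = 0
    while i < n:
        if not isnan[i]:
            i += 1
            continue
        j = i
        while j < n and isnan[j]:
            j += 1
        gap = j - i
        if 0 < i and j < n and gap <= max_gap:
            win = 2 * gap
            pre, post = pupil[max(0, i - win):i], pupil[j:j + win]
            stable = (np.mean(~np.isnan(pre)) >= 0.5 and
                      np.mean(~np.isnan(post)) >= 0.5)
            if stable:
                # interpolate between the last valid pre-gap and first valid post-gap samples
                pupil[i:j] = np.linspace(pupil[i - 1], pupil[j], gap + 2)[1:-1]
        i = j
    return pupil
```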

A relative percentage change measure of pupil dilation (increase in size) was calculated using, as a baseline, the last 300 ms of the 1-s scrambled image (to avoid the pupillary light reflex; [66]) which appeared directly before the onset of the pre-box. The following formula was used:

a = ((b − c) / c) × 100

where a is the percentage change from baseline to each of the following sections of the video: pre-box, emotional reaction, zoom in and post-box; b is the mean pupil diameter during the pre-box, emotional reaction, zoom in and post-box video sections; and c is the mean pupil diameter during the 300 ms before the onset of the pre-box (i.e. the last 300 ms of the scrambled image), per participant. To create a variable representing social-emotional calibration, we then subtracted the new relative pre-box variable from the post-box variable, such that positive values indicate greater social-emotional calibration. As the pre- and post-box were visually identical, greater pupil dilation in the post- vs. pre-box was taken as an index of learning about the happy- or fear-inducing contents of the box, through the actors’ emotional expressions shown in the scene.
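A short worked example of this calculation is sketched below; the pupil diameters are hypothetical values used purely to illustrate the baseline correction and the calibration score.

```python
# Illustrative example of the baseline-correction formula, a = ((b - c) / c) * 100,
# and of the social-emotional calibration score (post-box minus pre-box change).
# All diameters (mm) are hypothetical placeholders, not data from the study.
def percent_change(section_mean, baseline_mean):
    """Relative pupil change (%) of a video section from the 300-ms baseline."""
    return (section_mean - baseline_mean) / baseline_mean * 100.0

baseline = 3.20                      # mean diameter over last 300 ms of scrambled image
sections = {"pre-box": 3.25, "emotional reaction": 3.40,
            "zoom in": 3.38, "post-box": 3.45}

changes = {name: percent_change(m, baseline) for name, m in sections.items()}
calibration = changes["post-box"] - changes["pre-box"]   # positive = greater calibration
```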

Two sets of areas of interest (AOIs) were created with the Tobii Studio software. The first was for the happy and fearful faces, to measure visual attention to these facial expressions (see Fig. 2), and the second was to measure visual attention to the pre- and post-boxes (these were the size of the whole screen). Visual attention data (total fixation duration within the face AOIs) were also extracted from Tobii Studio using a fixation filter (I-VT) with the default pre-sets (maximum gap length 75 ms, window length 20 ms, velocity threshold 30 degrees per second, maximum time between fixations 75 ms, maximum angle between fixations 0.5°), with the exception that the minimum fixation duration was set to 100 ms. This minimum fixation duration was chosen as eye-tracking data of 100 ms or more are not only more reliable than data tracked for shorter durations [67] but are also considered to be a reliable index of what elements in a scene are actually captured and processed [68].
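For reference, the filter settings reported above can be collected as in the sketch below; the dictionary keys, the fixation record format and the helper function are illustrative assumptions, not the Tobii Studio API.

```python
# I-VT fixation filter settings as reported in the text (illustrative constants only).
IVT_SETTINGS = {
    "max_gap_length_ms": 75,
    "window_length_ms": 20,
    "velocity_threshold_deg_per_s": 30,
    "max_time_between_fixations_ms": 75,
    "max_angle_between_fixations_deg": 0.5,
    "min_fixation_duration_ms": 100,   # raised from the default pre-set, per [67, 68]
}

def total_fixation_duration(fixations, aoi_name, settings=IVT_SETTINGS):
    """Sum fixation durations inside a given AOI, keeping only fixations that meet
    the minimum duration. `fixations` is assumed to be a list of dicts with
    'aoi' and 'duration_ms' keys (a hypothetical export format)."""
    return sum(f["duration_ms"] for f in fixations
               if f["aoi"] == aoi_name
               and f["duration_ms"] >= settings["min_fixation_duration_ms"])
```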

Face areas of interest (AOIs) for the happy (1a) and fear (1b) conditions. These AOIs span the full time the emotional expression is shown on the actor’s face (happy condition = 6.78 s, fear condition = 6.99 s), from when she reacts after opening the box until just before the camera zooms back in to the box close-up
