As part of a PhD project (Derzsi, 2017), we measured the spatio-temporal limits of depth perception. In a secondary experiment within the project, which was essentially a replication of Norcia and Tyler's study, we collected 537 good trials from the EEG recordings of 4 participants (adults, 2 males, 2 females, age 23.5 ± 3.5 years). The project was approved by the Ethics Committee of the Faculty of Medical Sciences of Newcastle University.
We used two calibrated Dell P992 CRT monitors in a Wheatstone stereoscope configuration to create our stereoscopic stimulus. The participant's head was placed on a chin rest in front of the mirror, and the displays covered 40 × 40° visual angle. The refresh rate of the monitors was 100 Hz. A photo of the set-up is shown in Figure 4.
Figure 4. One cheerful participant wearing our 128-channel EGI electrode cap, sitting in front of the 3D display. The mirror assembly of the Wheatstone stereoscope is just behind her head.
We wrote a stimulus program using Psychtoolbox (Pelli, 1997; Kleiner et al., 2007) that displayed a dynamic random-dot stereogram (Julesz, 1971) consisting of an equal number of black and white dots, presented on a 50% gray background. The mean luminance of the stimulus was 57.5 cd/m², and the dot density was 0.06%. The locations of the dots were updated on every frame (every 10 ms).
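As an illustration of this frame-by-frame update, the following Psychtoolbox sketch draws a single frame of such a stereogram. It is not our actual stimulus code: the dot count, dot size, and pixel disparity are placeholders, and both half-images are drawn into one window for brevity, whereas the real set-up presented one half-image per CRT through the stereoscope mirrors.

% Sketch: draw one frame of a dynamic random-dot stereogram (illustrative values only).
screenId = max(Screen('Screens'));
win      = Screen('OpenWindow', screenId, 127);        % 50% gray background
[w, h]   = Screen('WindowSize', win);

nDots    = 500;                                        % placeholder dot count
dotPix   = 4;                                          % dot size in pixels
dispPix  = 6;                                          % placeholder: +/-0.05 deg disparity converted to pixels

xy  = [rand(1, nDots) * w; rand(1, nDots) * h];        % new random dot positions, regenerated every frame
col = repmat([0 255], 3, nDots/2);                     % half black, half white dots (one RGB column per dot)

Screen('DrawDots', win, xy + [ dispPix/2; 0], dotPix, col, [], 1);   % left-eye half-image
Screen('DrawDots', win, xy + [-dispPix/2; 0], dotPix, col, [], 1);   % right-eye half-image
Screen('Flip', win);                                   % one 10 ms frame at 100 Hz
Screen('CloseAll');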
Trials were initiated by the participants and were short, between 6 and 8 s. Each trial featured a "dot onset" preamble of between 1 and 1.5 s, during which the dots were displayed with zero disparity. Once this time had elapsed, the applied binocular disparity ("disparity onset") alternated between ±0.05° at a rate of 2.1 Hz (48 frames per cycle), as depicted in Figure 5. This alternation continued for a random time between 5 and 6 s. The EEG traces were then temporally aligned so that the onset of the disparity alternation occurred at t = 0, as shown in Figure 6.
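The disparity time course of a single trial can be sketched numerically as follows; the variable names and the plot at the end are ours, for illustration, and only the timing values given above are used.

% Sketch of one trial's disparity time course (timing values from the text above).
fps       = 100;                                 % CRT refresh rate, Hz
preambleS = 1 + 0.5*rand;                        % "dot onset" preamble: 1-1.5 s of zero disparity
stimS     = 5 + rand;                            % disparity alternation: 5-6 s
period    = 48;                                  % frames per full alternation cycle (~2.1 Hz)
amp       = 0.05;                                % disparity amplitude in degrees

nPre   = round(preambleS * fps);
nStim  = round(stimS * fps);
frames = 0:nStim-1;
squareWave = amp * (2 * (mod(frames, period) < period/2) - 1);   % +/-0.05 deg square wave
disparity  = [zeros(1, nPre), squareWave];       % whole trial: preamble, then alternation

onsetIdx = nPre + 1;                             % "disparity onset", aligned to t = 0 in the EEG
t = ((1:numel(disparity)) - onsetIdx) / fps;     % time axis in seconds relative to onset
plot(t, disparity); xlabel('Time from disparity onset (s)'); ylabel('Disparity (deg)');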
Figure 5. The stimulus used in our experiment and in Norcia and Tyler's experiment: a correlated random-dot stereogram plane that bounced in and out of the screen plane with positive and negative binocular disparity. Norcia and Tyler used a modified television set to create anaglyph stereograms, whereas our experiment used two CRT monitors in a Wheatstone stereoscope arrangement (Norcia and Tyler, 1984). Copyright 1984, with permission from Elsevier.
Figure 6. Anatomy of a trial: the preamble was displayed for a random time between 1 and 1.5 s, then the modulated bouncing disparity appeared on the screen for a random time between 5 and 6 s. The timing of the "disparity onset" event was recorded with millisecond precision.
We used Electrical Geodesics' ("EGI," Eugene, Oregon, USA) 128-channel HydroCel Geodesic Sensor Net (GSN) system to record our EEG data. The cap makes contact with the participant's scalp through silver chloride electrodes with sponges soaked in an electrolyte, a saline solution with baby shampoo mixed in. For each channel, the impedance was kept below 50 kΩ. The signal was sampled at 1 kHz, and the "disparity onset" event was recorded as a TTL signal coupled directly from the CRT monitor using a photodiode and a peak-detector circuit.
In Net Station (EGI's proprietary EEG software) we filtered the continuous recordings between 0.1 and 70 Hz, and applied a narrow band-stop (notch) filter to reduce the effect of the 50 Hz mains hum. The recordings were then segmented around the "disparity onset" event within the trials, and further processing was done in Matlab. Trials containing cardiovascular artifacts, eye blinks, or other muscle-movement artifacts were rejected. Trials in which more than 10% of channels were noisy (showing, for example, signs of electrode detachment or drying of the electrolyte in the EEG signal) were also rejected. For further analysis, we used only a single channel (no. 72 of the GSN), located just above the inion.
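For readers without access to Net Station, an approximately equivalent preprocessing pipeline can be sketched in EEGLAB; the file name and the event label below are placeholders, and the artifact rejection described above is only indicated in the comments.

% Approximate EEGLAB equivalent of the Net Station preprocessing (names are placeholders).
EEG = pop_loadset('filename', 'participant01.set');     % continuous 128-channel recording
EEG = pop_eegfiltnew(EEG, 0.1, 70);                      % 0.1-70 Hz band-pass filter
EEG = pop_eegfiltnew(EEG, 48, 52, [], 1);                % 50 Hz notch (revfilt = 1 makes it band-stop)
EEG = pop_epoch(EEG, {'DisparityOnset'}, [-1 5]);        % segment around the disparity-onset TTL event
% Trials with cardiovascular, blink, or muscle artifacts, or with >10% noisy channels
% (e.g., detached electrodes or dried electrolyte), were rejected by inspection.
occipital = squeeze(EEG.data(72, :, :));                 % samples x trials for GSN channel 72 (above the inion)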
We analyzed the trials using our own Matlab code, and some of the analysis was done using EEGLAB (Delorme and Makeig, 2004). We analyzed the simulation results and the EEG data in the same way, with the exception that in the simulation we investigated only the first harmonic of the tagged frequency.
In the spectral analysis, the neural response to the stimulus is detected by identifying a peak at the known temporal frequency of the stimulus or at one of its harmonics. In both spectral metrics (formulas A and B in Table 1), we compared the sample's Fourier component at these signal harmonics to every other frequency (i.e., the noise) included in the analysis. We counted signal detection as successful when the value at the harmonic was larger than the 95th percentile of the noise. The probability of false detection is calculated as the ratio of the number of noise peaks above the 95th percentile to the number of Fourier components included at distinct temporal frequencies in the analysis:
where S_signal(f) is the value of the signal sample and N_noise(f) is the noise distribution at frequency f.
S_signal(f) is always a single component in the simulation. In the experimental data analysis, we used the first six harmonics of the temporal frequency of the periodic stimulus.
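A simplified Matlab sketch of this detection step is given below. The sampling rate and trace length are illustrative, the random vector stands in for a recorded channel-72 trace, and prctile requires the Statistics Toolbox.

% Sketch: detect the tagged frequency and estimate the false-detection probability.
fs    = 1000;                                % sampling rate, Hz
f1    = 2.1;                                 % disparity modulation frequency, Hz
x     = randn(6000, 1);                      % substitute a recorded (or trial-averaged) channel-72 trace here
n     = numel(x);
X     = abs(fft(x - mean(x)));               % amplitude spectrum
freqs = (0:n-1)' * fs / n;

harmonics   = f1 * (1:6);                                        % first six harmonics of the stimulus
[~, sigIdx] = arrayfun(@(h) min(abs(freqs - h)), harmonics);     % nearest FFT bin to each harmonic
noiseIdx    = setdiff(2:floor(n/2), sigIdx);                     % every other frequency counts as noise

threshold = prctile(X(noiseIdx), 95);        % 95th percentile of the noise
detected  = X(sigIdx) > threshold;           % detection decision at each harmonic
pFalse    = mean(X(noiseIdx) > threshold);   % fraction of noise components above the same threshold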
In the trials, we looked at the 1/f-compensated spectrum and the calculated coherency of the SSVEPs of a single channel over the central occipital area. Since the waveform of the temporal modulation of the stereogram's depth plane is a symmetrical square wave, which contains only odd harmonics, and we know that the neural mechanism triggered is sensitive to changes in disparity, we expect the first derivative of this signal to be present in our EEG recordings; because a change in disparity occurs twice per modulation cycle, a mechanism responding to each change regardless of its direction produces energy at the even harmonics. Therefore, we consider only the even harmonics to be linked to disparity processing, and the odd harmonics to be the original signal passing through the human visual system.
The coherency values are compared against a large number (1,000) of synthesized, phase-scrambled noise data sets. Unlike a bootstrapping operation, where the data would be re-sampled at the trial level, we generated each synthetic data set with the same number of trials as the real data we analyzed. This allows us to calculate the 95th percentile of the noise distribution not just across the spectrum, but also across data sets, and to create a reliable measure of the upper noise floor, or "noise threshold." If a coherency value is above this noise threshold, we know immediately that it is statistically significant. The exact probability can be worked out using formula 4 as well.
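The surrogate construction can be sketched in Matlab as follows. This is an illustration only: the trial count and length are placeholders, random numbers stand in for the recorded trials, and a trial-averaged amplitude spectrum is used as a stand-in statistic where the actual analysis used coherency.

% Sketch: phase-scrambled surrogate data sets and the resulting 95th-percentile noise threshold.
nTrials  = 50;                            % match the number of real trials analyzed
nSamples = 5000;
trials   = randn(nSamples, nTrials);      % substitute the real single-channel trials here
nSets    = 1000;                          % number of synthesized noise data sets
surrogateStat = zeros(nSets, floor(nSamples/2));

for s = 1:nSets
    scrambled = zeros(size(trials));
    for k = 1:nTrials
        X   = fft(trials(:, k));
        phi = angle(fft(randn(nSamples, 1)));                     % random phases
        scrambled(:, k) = real(ifft(abs(X) .* exp(1i * phi)));    % same amplitude spectrum, scrambled phase
    end
    S = abs(fft(mean(scrambled, 2)));                             % stand-in statistic for coherency
    surrogateStat(s, :) = S(1:floor(nSamples/2)).';
end

% Noise threshold: 95th percentile taken across both frequencies and surrogate data sets
noiseThreshold = prctile(surrogateStat(:), 95);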