The paradigm was adapted from a previous study (16). While under continuous video EEG monitoring for diagnostic purposes, patients performed an object-location memory task, navigating freely in a circular virtual environment. The environment comprised a grassy plane (diameter of 9500 vu) bounded by a cylindrical cliff. Two mountains, a sun, and several clouds, rendered at infinity, provided patients with distal orientation cues (fig. S1). No intramaze landmark was shown. Patients completed the task on a laptop, using the arrow keys to move forward and to turn left and right, and the spacebar or backward key to indicate their response. Patients were asked to complete up to 160 trials but were instructed to pause or quit the task whenever they wanted.

At the beginning of the task, patients collected eight everyday objects (randomly drawn from a pool of 12 objects) from different locations in the arena (“initial learning phase”). The objects appeared one after the other. This period (variable duration of approximately 2 min, as the whole task was self-paced) was excluded from all analyses. Afterward, patients completed a variable number of trials, depending on compliance.

Each trial consisted of four phases (Fig. 1A). First, one of the eight objects was presented for 2 s (cue presentation). Patients were then asked to navigate to the associated goal location within the virtual environment (retrieval); the cue image was no longer shown during this period, and there was no delay between cue presentation and retrieval. After patients had indicated their response via a button press at the assumed goal location, they received feedback depending on response accuracy (feedback; fixed duration of 1.5 s). Response accuracy was measured as the distance between the assumed goal location and the correct goal location (drop error).
Last, the object was presented at the correct location, and patients had to collect it to further strengthen the association between the object and its goal location (re-encoding). After each trial, a fixation crosshair was shown for a variable duration of 3 to 5 s (uniformly distributed). Across trials, patients had to retrieve the cue objects in random order, preventing them from using a sequential learning strategy. Furthermore, starting locations were identical to the ending locations of the preceding trials and thus varied from trial to trial, preventing patients from using a response-based navigation strategy.

Chance-level performance for the drop errors was determined by randomly assigning response locations to correct goal locations 50,000 times per patient and then averaging across trials, surrogate repetitions, and patients to obtain one overall chance-level value. Experimental events were written to a log file (temporal resolution of 20 ms). Speed was calculated as v = d/t, where d is the distance between consecutive locations within the virtual environment and t is the duration between the corresponding time stamps. Triggers were detected either with a phototransistor attached to the screen, marking the onsets and offsets of the cue presentation phase, or with an independent custom MATLAB (R2017b, The MathWorks, Massachusetts) program that sent triggers both to the paradigm and to the iEEG recording software at randomly jittered intervals between 0.5 and 5 s. All of our analyses focused on the cue presentation and retrieval periods.
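The drop-error, chance-level, and speed computations described above can be sketched in Python (the original analyses used MATLAB; the function names, array shapes, and per-patient shuffling scheme shown here are our assumptions for illustration, not the authors' code):

```python
import numpy as np

def drop_error(responses, goals):
    """Euclidean distance (in virtual units, vu) between each response
    location and the corresponding correct goal location."""
    responses = np.asarray(responses, dtype=float)  # shape (n_trials, 2)
    goals = np.asarray(goals, dtype=float)          # shape (n_trials, 2)
    return np.linalg.norm(responses - goals, axis=1)

def chance_drop_error(responses, goals, n_reps=50_000, seed=0):
    """Per-patient chance level: randomly re-assign response locations to
    correct goal locations n_reps times (50,000 in the text) and average
    the resulting drop errors across trials and surrogate repetitions."""
    rng = np.random.default_rng(seed)
    responses = np.asarray(responses, dtype=float)
    goals = np.asarray(goals, dtype=float)
    means = np.empty(n_reps)
    for r in range(n_reps):
        shuffled = responses[rng.permutation(len(responses))]
        means[r] = drop_error(shuffled, goals).mean()
    return means.mean()

def speed(locations, timestamps):
    """Instantaneous speed v = d / t between consecutive logged locations,
    with timestamps at the log file's 20-ms resolution."""
    d = np.linalg.norm(np.diff(np.asarray(locations, float), axis=0), axis=1)
    t = np.diff(np.asarray(timestamps, float))
    return d / t
```

Averaging the per-patient values returned by `chance_drop_error` across patients would then yield the single overall chance-level value described above.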
