Self-report questionnaires: participants provided information on their age, height, weight, current use of psychiatric medication and co-morbidities, ethnicity, clinical treatment status, living status (e.g. living with parents, in own accommodation), and level of educational attainment. We also asked participants about their average sleep duration, specifically "on average, how many hours of sleep do you think you get a night?". The Eating Disorder Examination Questionnaire (EDE-Q; [54]) was included to assess current ED-related symptomatology. The trait subscale of the State-Trait Anxiety Inventory (STAI-T; [55]) was included to assess trait anxiety.

Pattern separation task: Participants were given standardised instructions at the beginning of the session, which explained the stages of the task, identified the response keys and, finally, offered the opportunity to ask questions. Participants completing the task in person were read the instructions from a script, whereas participants completing the task remotely read an identical script themselves. For this latter group, instructions in the second part of the task (recognition phase) were read aloud by an automated male voice. These participants were also unaccompanied when completing the task, although they were encouraged to contact a researcher, who was available at the time of testing, with any questions. Timings of the task are standardised by the Stark Laboratory, facilitating comparisons across datasets. During the initial encoding phase, participants viewed 128 object images on a computer screen. The images were colour photographs of everyday objects on a white background (see [30] for more details). Participants were instructed to classify each image as either an "indoor" or an "outdoor" object (based on their opinion) by pressing specific buttons on their keyboard. Participants completing the task remotely were given practice trials to demonstrate the task procedure; owing to technological limitations, this was not possible in person. Each image was presented for 2 s, and the inter-stimulus interval was 0.5 s. The encoding phase lasted 5.3 min in total. After a delay of several minutes, a retrieval phase began. During the retrieval phase, participants viewed 192 object images: one third (n = 64) were completely new images (foils), one third were identical to images presented in the encoding phase (targets or repeats), and the final third were similar to images presented during the encoding phase (lures). Foil, target and lure stimuli were presented in random order.
Participants were instructed to classify these images as “new”, “old” or “similar items”, by pressing specific buttons on their keyboard. Each image was presented for 2 s, and the inter-stimulus interval was 0.5 s. The retrieval phase lasted for 8 min in total. All participants were presented with object stimuli from Sets C or D [30].
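The reported phase durations follow directly from the trial counts and timings above; a minimal sketch of that arithmetic (constants taken from the text, variable names our own):

```python
# Sanity check of the phase durations implied by the task parameters.
PRESENTATION_S = 2.0   # each image shown for 2 s
ISI_S = 0.5            # inter-stimulus interval of 0.5 s

# Encoding: 128 trials; retrieval: 192 trials (64 foils, 64 targets, 64 lures).
encoding_min = 128 * (PRESENTATION_S + ISI_S) / 60   # approx. 5.3 min
retrieval_min = 192 * (PRESENTATION_S + ISI_S) / 60  # 8.0 min
```

This confirms the stated totals: 128 trials of 2.5 s each give roughly 5.3 min of encoding, and 192 such trials give exactly 8 min of retrieval.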

As in previous studies (e.g. [30]), the Lure Discrimination Index (LDI) is the main outcome measure. LDI was calculated as the rate of "similar" responses given to lure items minus the rate of "similar" responses given to foil items. A higher LDI reflects better behavioural pattern separation performance. The traditional recognition memory score for repeat items was calculated as the rate of "old" responses given to repeat items minus the rate of "old" responses given to foil items. A higher recognition score reflects better general recognition ability.
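The two scores described above can be expressed in a few lines; this is an illustrative sketch (function names and example response rates are our own, not from the original protocol):

```python
def lure_discrimination_index(p_similar_lure: float, p_similar_foil: float) -> float:
    """LDI: rate of 'similar' responses to lures minus the rate to foils.

    Subtracting the foil rate corrects for any general bias towards
    responding 'similar'.
    """
    return p_similar_lure - p_similar_foil


def recognition_score(p_old_repeat: float, p_old_foil: float) -> float:
    """Traditional recognition: rate of 'old' responses to repeats minus
    the rate of 'old' responses to foils (correcting for false alarms)."""
    return p_old_repeat - p_old_foil


# Hypothetical participant: 70% of lures called "similar" vs 10% of foils,
# and 85% of repeats called "old" vs 5% of foils.
ldi = lure_discrimination_index(0.70, 0.10)   # 0.60
rec = recognition_score(0.85, 0.05)           # 0.80
```

Higher values on both measures indicate better performance, with the LDI specifically indexing the ability to discriminate similar lures from studied items.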






