Stimuli and Design

The chosen paradigm to elicit brain activity by AV speech stimulation has been used successfully in healthy subjects and in different neuropsychiatric populations (Szycik et al., 2009a,b; Rüsseler et al., 2017). The stimuli, taken from the German part of the CELEX database (Baayen et al., 1993), comprised 70 disyllabic nouns with a Mannheim frequency per million (MannMln) of at least one (see Supplementary Table S1 for the stimulus list used in our study). The MannMln measure indicates how often a word occurs per million words of the 6,000,000-word Mannheim corpus. The stimuli were spoken by a female native speaker of German with linguistic experience and were recorded with a digital camera and a microphone. The videos (400 × 400 pixels resolution, 6° visual angle) showed a frontal view of the speaker's whole face and were divided into segments of 2 s duration, accompanied by mono audio streams. The stimuli were randomly divided into two sets of 35 items each. The first set contained video segments with congruent AV information: the lip movements matched the spoken word. The second set consisted of video sequences with incongruent AV information: the lip movements did not match the spoken word (e.g., video: Engel/angel; audio: Hase/rabbit). In the incongruent stimuli, the auditory and visual streams started simultaneously at the onset of vocalization.

The participants were instructed to watch and listen to the stimuli carefully and were not informed that some stimuli were AV-incongruent. They were asked to attend to both modalities. To ensure that subjects attended to the stimuli, we used a simple semantic categorization task and analyzed the detection rate (response rate for each stimulus): subjects responded to each stimulus by pressing the left or right button of the response device to classify it as describing a living object (five target stimuli) or an object of another category (remaining 30 stimuli).

The loudness of the presented stimuli was adjusted individually to just above the threshold for auditory comprehension. First, the interaural loudness difference (due to the individually fitted ear plugs) was corrected for each subject by presenting a simple test tone and adjusting the sound pressure level (SPL) bilaterally until the subject signaled that the tone was equally loud in both ears. In a second step, we presented test stimuli during real scanner noise and increased the SPL until the subject signaled via the response device that the stimuli were heard well.
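For illustration, the following minimal Python sketch shows how the two stimulus sets could be assembled and ordered. Only the set sizes (2 × 35), the congruent/incongruent split, and the randomized order follow the text; the word identifiers, the dubbing scheme for incongruent items, and the random seed are our assumptions.

```python
import random

random.seed(1)  # assumed; any reproducible seed works

# Placeholder identifiers for the 70 disyllabic CELEX nouns (Supplementary Table S1).
words = [f"noun_{i:02d}" for i in range(70)]
random.shuffle(words)
congruent_words, incongruent_words = words[:35], words[35:]

# Congruent trials: video and audio carry the same word.
trials = [{"video": w, "audio": w, "congruent": True} for w in congruent_words]

# Incongruent trials: the audio track is dubbed with a different noun,
# e.g., video "Engel" with audio "Hase" (the pairing scheme here is an assumption).
shifted = incongruent_words[1:] + incongruent_words[:1]
trials += [{"video": v, "audio": a, "congruent": False}
           for v, a in zip(incongruent_words, shifted)]

random.shuffle(trials)  # randomized presentation order across both sets
```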

Presentation software (Neurobehavioral Systems, Inc., Albany, CA, United States) was used to deliver the stimuli. A slow event-related design was used for stimulus presentation. Each stimulation event was followed by a fixed rest period of 16 s, during which a dark screen with a fixation cross at the position of the speaker's mouth was shown. The total duration of the functional stimulation part of the experiment was therefore 21 min.
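As a sanity check on the reported run length, a short sketch of the resulting event schedule: 70 events of 2 s, each followed by 16 s of rest, give 70 × 18 s = 1260 s = 21 min. The trial count and durations are from the text; the onset bookkeeping is illustrative.

```python
STIM_DUR = 2.0    # s, duration of each video segment
REST_DUR = 16.0   # s, fixed rest with fixation cross
N_TRIALS = 70     # 35 congruent + 35 incongruent stimuli

# Onset of each stimulation event relative to the start of the functional run.
onsets = [i * (STIM_DUR + REST_DUR) for i in range(N_TRIALS)]

total = N_TRIALS * (STIM_DUR + REST_DUR)
print(f"run length: {total:.0f} s = {total / 60:.0f} min")  # 1260 s = 21 min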

Stimulus delivery and communication between the examination and control rooms used an fMRI-compatible audio system integrated into earmuffs, which also attenuated residual background scanner noise. Visual stimuli were presented on an MRI-compatible screen positioned at the front of the scanner; subjects viewed the screen through a mirror mounted on top of the head coil. To ensure good visibility, a detailed test picture of similar size and resolution to the video sequences was presented before the experiment, and all participants were asked to report its content.
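For readers reproducing the display geometry, a small sketch relating the 6° visual angle of the videos to on-screen size. The 6° figure is from the stimulus description above; the effective viewing distances (screen seen via the head-coil mirror) are assumed example values, not reported parameters.

```python
import math

def stim_size_cm(visual_angle_deg: float, viewing_distance_cm: float) -> float:
    """On-screen extent subtending a given visual angle at a given distance."""
    return 2 * viewing_distance_cm * math.tan(math.radians(visual_angle_deg) / 2)

# Example effective viewing distances (assumed) for a 6 deg stimulus:
for d in (60, 90, 120):
    print(f"{d} cm viewing distance -> {stim_size_cm(6, d):.1f} cm on screen")
```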
