Driving behavior was assessed using commercially available driving simulator hardware and software (Carnetsoft® version 8.0, Groningen, The Netherlands). The driving simulator consisted of three 48′′ monitors (laterally angled at 45°) on regular desks, providing a horizontal field of view of 195° (for a graphical illustration of the setup, see Wechsler et al., 2018). A VW Golf seat, a Logitech G27 steering wheel (Logitech International S.A., Lausanne, Switzerland), and gas and brake pedals were located at positions similar to those in a real car, and a conventional numeric keypad was mounted on the right side near the steering wheel. The numbers 1 to 6 were visible on the keypad (two rows of three numbers); all other keys were covered with black tape. A headset was used for task presentation and characteristic driving sounds. The seat and the gas and brake pedals were individually adjustable so that every participant could assume a comfortable driving position. Motion sickness was minimized by utilizing a research-grade simulator with wide-screen displays for smooth rendering of visual motion. The visual field around the displays was covered with black cloth to reduce perceptual conflicts between central and peripheral vision.
The driving scenario lasted about 25 min (25.7 km) and simulated a typical rural environment: a slightly winding road through a landscape of grasslands, clouds, small trees, animal enclosures, hay rolls, construction sites, road signs, and gas stations. No intersections, traffic lights, cyclists, or pedestrians were included. Oncoming traffic comprised other cars and buses. Participants drove a VW Golf and followed a lead car. Another car followed at a reasonable distance behind the participant's car. The lead car was programmed to drive at 70 km/h and slowed down slightly when its distance to the participant's car exceeded 100 m. Participants were instructed to drive as they normally would and to follow the lead car at a reasonable distance at a speed of 70 km/h unless other speed limits (i.e., 40 km/h during braking tasks) were specified. They were not allowed to pass the lead car, and they were told that no cars would pass them. Ten braking sections were included in the driving environment. When reaching one of these sections, the lead car briefly braked: it slowed down to 40 km/h for about 6 s and then sped up again to 70 km/h. Braking sections, however, were not further considered in the present study, and they did not overlap with the additional tasks outlined below. If the participant's car crashed (e.g., into the lead car or oncoming traffic, rarely into a cow or tree), the front window shattered (with acoustic feedback) and the driver's car was relocated between the rear and lead car. Participants practiced driving for 3–4 min (driving only) in the same environment used for data acquisition. They also practiced the additional tasks for 3–4 min (tasks only) while their car drove in autopilot mode in the same environment. Participants did not practice dual-task driving. Instructions on driving and additional tasks were provided verbally.
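The lead-car behavior described above (cruising at 70 km/h, slowing slightly when the gap exceeds 100 m, and briefly braking to 40 km/h in braking sections) can be sketched as a simple rule-based speed controller. This is an illustrative reconstruction under stated assumptions, not Carnetsoft's actual implementation; the function and the catch-up speed of 65 km/h are hypothetical.

```python
def lead_car_target_speed(gap_m, in_braking_section,
                          cruise_kmh=70.0, brake_kmh=40.0,
                          max_gap_m=100.0, catchup_kmh=65.0):
    """Rule-based target speed for the lead car.

    Speeds and the 100 m gap threshold follow the protocol; the exact
    amount of slowing ("slightly") is an assumption (catchup_kmh).
    """
    if in_braking_section:
        return brake_kmh      # slow to 40 km/h for ~6 s in a braking section
    if gap_m > max_gap_m:
        return catchup_kmh    # slow down slightly so the participant can catch up
    return cruise_kmh         # otherwise cruise at 70 km/h
```

In each simulation step, the simulator would compare the current gap to the participant's car and the section type against these rules and adjust the lead car's speed accordingly.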
All participants followed instructions correctly during practice trials and during data acquisition, without asking for repetitions or for slower speech. From this, we concluded that their language comprehension and hearing were not markedly impaired.
While driving, participants executed different additional tasks. These tasks were modeled after typical real-life activities often performed while driving. To increase realism and to mimic the varying demands of everyday car driving, we varied stimulus modalities (visual input on the windshield = in-vehicle display; auditory input via headphones = passengers, radio, or GPS), cognitive-motor task loads (i.e., baseline driving = no task, typing = dashboard operations, reasoning = conversation with passengers), and response modalities (typing = visuomotor responses, reasoning = verbal responses; Bock et al., 2018, 2019a). The number of trials (total N = 60) was equally distributed across the different task types and presentation modalities in both project phases. Tasks were scheduled in a mixed order and at irregular distance intervals. The driving scenario and the order and type of additional tasks were identical for all participants within each project phase (same seed; Bock et al., 2019a). Participants were instructed not to prioritize either the driving or the additional task, but to respond as quickly and as accurately as possible to the additional task. The following tasks were utilized:
The reasoning task required participants to verbally state an argument for or against an issue of general interest (e.g., "state an argument against using electric cars", in German). Requests were limited to 10 words per sentence (max. 80 characters, 54 pt font size, max. two lines) and could not be answered with a simple "yes" or "no". Visual presentation lasted 5 s; auditory presentation varied between 3 and 4 s. Participants were instructed to respond verbally while continuing to drive. Answers were rated as valid or invalid and recorded by the experimenter. The typing task required participants to enter a 3-digit number (e.g., "345") on the numeric keypad to the right of the steering wheel. Only numbers consisting of the digits 1–6 were presented, and only those digits were accessible on the keypad. Visual presentation lasted 5 s; auditory presentation lasted about 3 s. The numbers entered and the reaction times for each number were recorded digitally by the software.
In project phase I only, an additional memorizing task was used, presented similarly to the two tasks described above. Participants had to memorize and compare gas station prices (visual) and traffic news (auditory), respectively. In project phase II, the memorizing task was replaced with additional trials of the reasoning and typing tasks to keep the total number of trials at N = 60. For the current analysis, we therefore used data only from the reasoning and typing tasks, excluding the memorizing task from all further analyses. Driving performance data, including the lateral position and velocity of the participants' car, were recorded at 10 Hz. Preprocessing is detailed below (see "Driving Behavior" section). Performance on the additional tasks (reasoning and typing) was not evaluated in this study, as we were only interested in driving behavior.
Fluid cognitive functions were assessed with computer-based tests adapted from the literature, all programmed in E-Prime 2.0 (Psychology Software Tools, Pittsburgh, PA, USA). Each test took about 10 min. Stimuli were presented on a 24′′ monitor (1,920 × 1,080 screen resolution). All stimuli were black and presented on a white screen background. Standardized instructions were displayed first, followed by up to three practice runs. Response feedback was provided after practice trials, but not after registered trials. All tests comprised six blocks of stimuli separated by inter-block breaks of 5 s (20 s after block 3). The response-stimulus interval was 800–1,200 ms; if there was no response on the preceding trial, the response-stimulus interval started after 2,000 ms. Participants responded by pressing the "X" or "M" key on a German keyboard with their left and right index finger, respectively. They were instructed to respond as quickly and as accurately as possible. The reaction time of correct responses (RT) and the percentage of correct responses across all presented stimuli (ACC) were analyzed.
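The trial timing described above (a jittered 800–1,200 ms response-stimulus interval, with a 2,000 ms timeout before the interval starts when no response was given) can be illustrated with a minimal sketch. The function name and the use of Python rather than E-Prime's own scripting are assumptions for illustration only.

```python
import random

def response_stimulus_interval(responded, rng=None):
    """Return the delay (ms) before the next stimulus, per the protocol:
    a jittered 800-1,200 ms interval; after a missed response, the
    interval starts only once the 2,000 ms response window has elapsed."""
    rng = rng or random.Random()
    rsi = rng.uniform(800.0, 1200.0)
    return rsi if responded else 2000.0 + rsi
```

Drawing the interval anew on every trial prevents participants from anticipating stimulus onset.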
A visuospatial n-back test (2-back) was used to measure updating of working memory ("updating"; Schmiedek et al., 2009). Each block comprised 19 dots that were sequentially presented for 500 ms each in one field of a black 4 × 4 grid. Participants were asked to press the "M" key if the current dot appeared at the same position as the dot two trials before (target), and the "X" key if it appeared at a different position (non-target). The first two stimuli of each block were discarded from the analysis.
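The 2-back target rule can be expressed compactly: a stimulus is a target exactly when its grid position matches the position shown two trials earlier, which is also why the first two stimuli of a block cannot be scored. The following sketch is illustrative; the function name is hypothetical.

```python
def nback_targets(positions, n=2):
    """Classify each scorable stimulus in a block as target/non-target.

    positions: grid positions (e.g., cell indices 0-15 of the 4x4 grid)
    in presentation order. The first n stimuli cannot be targets and are
    excluded, mirroring the analysis described in the protocol.
    """
    return [positions[i] == positions[i - n] for i in range(n, len(positions))]
```

For a block such as positions `[5, 3, 5, 2, 5]`, the third and fifth dots repeat the position from two trials back and are therefore targets.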
The Simon test was administered to measure inhibition (Simon and Wolf, 1963; Simon and Rudell, 1967). Each block included a total of 32 trials of left- or rightward pointing arrows that were sequentially presented for 500 ms to the left or right of a centered fixation cross. For 50% of the trials, the direction and position of the arrow were congruent (e.g., rightward arrow on the right side); for the other 50% of trials, they were incongruent (e.g., rightward arrow on the left side). Participants were instructed to press the left key (“X”) for leftward pointing arrows, and the right key (“M”) for rightward pointing arrows.
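The structure of a Simon block described above (32 trials, half congruent and half incongruent, balanced across left/right arrows) can be sketched as a trial-list generator. This is a hedged illustration; the function name and shuffling scheme are assumptions, not the E-Prime implementation.

```python
import random

def simon_block(n_trials=32, seed=None):
    """Generate one Simon block as (arrow_direction, screen_side) pairs.

    Half the trials are congruent (direction == side), half incongruent,
    balanced across both arrow directions and presented in random order.
    """
    rng = random.Random(seed)
    trials = []
    for direction in ("left", "right"):
        other = "left" if direction == "right" else "right"
        for _ in range(n_trials // 4):
            trials.append((direction, direction))  # congruent trial
            trials.append((direction, other))      # incongruent trial
    rng.shuffle(trials)
    return trials
```

The inhibition cost (Simon effect) is then the RT difference between incongruent and congruent trials.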
A spatial task switching test was used to measure shifting (modified from Kray and Lindenberger, 2000). Each block included 17 trials that were sequentially presented in the middle of the screen for 1,500 ms each. Each stimulus was either a circle or a rectangle and was either small or big. Participants had to respond to either the size (A) or the form (B) of the stimuli in the order AA-BB-AA-BB-AA-BB-AA-BB-A. They pressed the "X" key for small or circular stimuli, and the "M" key for big or rectangular stimuli. The first stimulus of each block was not analyzed.
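The predictable AA-BB task order above can be generated with one line: the active task alternates every two trials, so a 17-trial block ends on a single A. The helper below is an illustrative sketch with a hypothetical name.

```python
def switching_sequence(n_trials=17):
    """Predictable AA-BB cue order: respond to size (A) or form (B).

    Trials i=0,1 are task A, i=2,3 task B, and so on; with 17 trials the
    sequence is AA-BB-AA-BB-AA-BB-AA-BB-A, as in the protocol.
    """
    return ["A" if (i // 2) % 2 == 0 else "B" for i in range(n_trials)]
```

Trials where the task changes relative to the previous trial (e.g., the first B after two As) are switch trials; shifting costs compare them against repeat trials.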
Cognitive processing speed was derived from the congruent condition of the Simon test. This condition requires only simple reactions to the pointing direction (left/right) of the arrows, involves little cognitive demand, and therefore reflects basic cognitive processing speed.
Spiroergometry (ZAN600 CPET, nSpire Health, Oberthulba, Germany) on a stationary bicycle (Lode Corival cpet, Groningen, the Netherlands) was used to assess cardiovascular fitness. Participants were asked to avoid intake of caffeine and alcohol for 12 h and any vigorous physical activity for 24 h before testing. A ramp protocol was applied to test for submaximal exhaustion (Niemann et al., 2016; Hübner et al., 2019; Stute et al., 2020). Participants were instructed to maintain a cycling frequency between 60 and 80 revolutions per minute. In project phase I, participants started at an initial load of 30 W that increased progressively by 10 W (female) or 15 W (male) per minute. Participants of project phase II started at an initial load of 10 W (female) or 20 W (male) that increased progressively by 15 W (female) or 20 W (male) per minute. Ramp protocols were preceded by a 3 min resting period and followed by a 5 min cool-down (1 min at initial load, then no load). In total, protocols lasted about 15–20 min. Electrocardiography (ECG; recorded with a 10-lead fully digital stress system; Kiss, GE Healthcare, Munich, Germany), breath-by-breath respiration [oxygen uptake (VO2) and carbon dioxide output (VCO2)], heart rate, blood pressure (every 2 min), and wattage were continuously assessed. Further, the respiratory exchange ratio (VCO2/VO2) was simultaneously determined. Borg's rate of perceived exertion scale (6–20: "very easy" to "very difficult") was administered every 2 min to assess perceived exertion during cycling. The protocol was stopped when the participant's respiratory exchange ratio remained > 1.05 for at least 30 s or exceeded 1.10, upon volitional fatigue, or upon occurrence of risk factors (i.e., heart rate, HR > approximately 220 minus age; blood pressure > 230/115 mmHg; dizziness; cardiac arrhythmia; or other abnormalities). The outcome measure was peak oxygen consumption (VO2 peak), which has been proposed as a sufficient indicator of cardiovascular fitness (Rankovic et al., 2010).
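The objective termination criteria above (respiratory exchange ratio > 1.05 sustained for 30 s, any value > 1.10, or heart rate above roughly 220 minus age) can be summarized in a small decision function. This is a simplified sketch under stated assumptions; it omits the blood pressure and clinical-judgment criteria, and the function name and the 1 Hz sampling assumption are illustrative.

```python
def should_stop(rer_samples, hr, age, sample_rate_hz=1.0):
    """Check the protocol's objective stop criteria for the ramp test.

    rer_samples: respiratory exchange ratio (VCO2/VO2) values in
    chronological order; hr: current heart rate; age: participant age.
    Simplified sketch: volitional fatigue, blood pressure, and other
    clinical criteria are not modeled here.
    """
    window = int(30 * sample_rate_hz)          # samples spanning 30 s
    if any(r > 1.10 for r in rer_samples):
        return True                            # RER exceeded 1.10
    if len(rer_samples) >= window and all(r > 1.05 for r in rer_samples[-window:]):
        return True                            # RER > 1.05 sustained for 30 s
    return hr > 220 - age                      # HR above ~220 minus age
```

In practice such criteria supplement, rather than replace, supervision by trained staff.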
Spiroergometry was supervised by an experienced sports scientist. In project phase I, the ramp protocol was preceded by an additional, less demanding, alternating 30 W/80 W protocol that was performed for approximately 10–15 min. Due to technical issues, five participants of project phase II were tested using a different spiroergometry device (Oxycon Pro, Erich Jaeger GmbH, Hoechberg, Germany); their data, however, were comparable to those of the other participants upon visual inspection and were therefore treated in the same way.
Motor coordinative fitness was assessed with a battery of three standardized tests covering different domains of motor coordinative fitness (Voelcker-Rehage et al., 2010). Before each test, participants were briefly familiarized with the procedure and checked for correct performance. Time was kept using a stopwatch. The Purdue Pegboard Test (model 32020, Lafayette Instruments, Lafayette, IN, USA) was administered to measure bimanual dexterity (Tiffin and Asher, 1948; Tiffin et al., 1985). Participants were asked to plug as many metal pegs as possible into two parallel rows (maximum 25 holes) of the pegboard with both hands simultaneously, from top to bottom, and hole by hole. Three runs were performed, each timed at 30 s. The outcome measure was the number of holes with correctly placed pegs, averaged across the three runs. The Feet Tapping Test was used to measure psychomotor speed (Voelcker-Rehage and Wiertz, 2003; Voelcker-Rehage et al., 2010). Participants were seated on a stationary chair and instructed to tap with both feet simultaneously back and forth across a mid-sagittal line on the floor for 20 s. They were instructed to move both feet completely across the line, with both soles flat on the floor. The outcome measure was the number of correct crossings, counted with a hand clicker. The better of two runs was selected for analysis. The One-Leg Standing Test with eyes open and eyes closed was performed to assess static balance (Ekdahl et al., 1989). Participants looked straight ahead and stood on one leg, while slightly flexing the other leg, for a maximum of 20 s (self-initiated). Eight runs were performed, four with eyes open and then four with eyes closed (two runs per leg in each condition). Timing was stopped when participants put down their lifted foot, pressed their legs together, hopped, or opened their eyes during eyes-closed balancing.
Due to a very distinct ceiling effect for eyes open balancing, only eyes closed balancing was analyzed. The outcome measure was the standing duration, averaged across all four runs of eyes closed balancing (Michikawa et al., 2009).