Setup and procedures for the VR experiments

F. Wörgötter, F. Ziaeetabar, S. Pfeiffer, O. Kaya, T. Kulvicius, M. Tamosiunaite

Experiments were performed with 49 human participants. These experiments are not harmful, no sensitive data were recorded, experimental data were treated anonymously, and only the instructions explained below were given to the participants. All participants provided their informed consent after we had explained to them the purpose and procedure of the experiments. Experiments were performed in accordance with the ethical standards laid down by the 1964 Declaration of Helsinki. We followed the relevant guidelines of the German Psychological Society, according to which these experiments, given the conditions explained above, do not need explicit approval by an ethics committee (Document: 205 28.09.2004 DPG: “Revision der auf die Forschung bezogenen ethischen Richtlinien”).

Procedures. We used a Vive VR headset and controller, released by HTC in April 2016, with a resolution of 1080 × 1200 pixels per eye. Its main advantage over competing headsets is its “room scale” system, which allows precise 3D motion tracking between two infrared base stations. This makes it possible to record and review actions for the experiment on a larger scale of up to 5 meters diagonally. Using human demonstration of each individual action, we implemented ten actions: Hide, Cut, Chop, Take down, Put on top, Shake, Lay, Push, Uncover, and Stir, performed with differently colored blocks, where only the “Hand” was always red. Human demonstration results in jerk-free trajectories of the moved blocks, so that the actions look natural. For each action type, 30 variants with different geometrical configurations and different numbers of distractors were recorded (see Suppl. Material for example VR videos). We performed experiments with 49 participants (m/f ratio: 34/15, age range 20–68 y, avg. 31.5 y), showing them these 300 actions in random order. Before starting, each participant was shown 10 training actions, during which the selection panel highlighted the shown action in green (Fig. 1A). In the actual experiments – some frames of the Hide action are shown in panel C – subjects had to press a button on the controller at the moment they believed they had recognized the action (Fig. 1B1). After the button press, the scene disappeared to avoid post-hoc recognition, and the subjects could then, without time pressure, use a pointer (Fig. 1B2) to indicate which action they had recognized.
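The trial procedure above can be summarized as a simple loop: shuffle all 300 recorded variants, play each one, record the reaction time at the button press, hide the scene, and log the label the subject selects. The sketch below is only an illustration of this logic, not the authors' experiment code; the callables play_clip, wait_for_button, and show_selection_panel are hypothetical stand-ins for the VR engine's functions.

```python
# Minimal sketch of the trial procedure described above (assumed structure,
# not the authors' actual implementation).
import random
import time

ACTIONS = ["Hide", "Cut", "Chop", "Take down", "Put on top",
           "Shake", "Lay", "Push", "Uncover", "Stir"]
VARIANTS_PER_ACTION = 30

def run_session(participant_id, play_clip, wait_for_button, show_selection_panel):
    # Build all 300 trials (10 actions x 30 variants) and shuffle their order.
    trials = [(a, v) for a in ACTIONS for v in range(VARIANTS_PER_ACTION)]
    random.shuffle(trials)

    results = []
    for action, variant in trials:
        t_start = time.monotonic()
        play_clip(action, variant)        # start the recorded VR demonstration
        wait_for_button()                 # subject presses when the action is recognized
        reaction_time = time.monotonic() - t_start
        # The scene is hidden after the button press to prevent post-hoc recognition;
        # the subject then selects a label from the panel without time pressure.
        chosen = show_selection_panel(ACTIONS)
        results.append({"participant": participant_id,
                        "action": action,
                        "variant": variant,
                        "reaction_time_s": reaction_time,
                        "response": chosen,
                        "correct": chosen == action})
    return results
```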

The frames shown in Fig. 1C correspond to the columns of the ESEC on the left side. At C1 the Hand is above the ground, and at C2 it touches Object 1 (yellow field). Through this, several other relations also come into being (other entries at C2). Object 1 leaves the ground at C3, and at C4 the yellow field shows that it is registered as “touching” Object 2 (see the note on the simulated vision process above). This leads to the emergence of many static and dynamic spatial relations (other entries in C4), and this is the column at which the ESEC unequivocally identifies the action as a Hide action. From there on, several more changes happen (C5–C7) until the action ends.
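To make the column-by-column reading concrete, the sketch below shows one way such an event-chain table could be represented and scanned for the columns in which relations change. The relation symbols ("N" = not touching, "T" = touching, "A" = above) and the particular object-pair rows are assumptions chosen to mirror the Hide example; they are not the paper's exact ESEC encoding.

```python
# Illustrative relation table for the Hide action: rows are object pairs,
# columns C1..C7 correspond to the key frames in Fig. 1C (assumed encoding).
esec = {
    ("Hand", "Ground"):     ["A", "A", "A", "A", "A", "A", "A"],
    ("Hand", "Object1"):    ["N", "T", "T", "T", "T", "N", "N"],  # Hand touches Object 1 at C2
    ("Object1", "Ground"):  ["T", "T", "N", "N", "N", "T", "T"],  # Object 1 leaves the ground at C3
    ("Object1", "Object2"): ["N", "N", "N", "T", "T", "T", "T"],  # Object 1 touches Object 2 at C4
}

def changing_columns(chain):
    """Return the (1-based) column indices at which at least one relation changes."""
    n_cols = len(next(iter(chain.values())))
    return [c + 1 for c in range(1, n_cols)
            if any(rels[c] != rels[c - 1] for rels in chain.values())]

print(changing_columns(esec))  # columns where new spatial relations come into being
```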
