In the APIC study [18], we tested whether an iPad loaded with the iSurvey software would be a suitable tool to assess team performance and team member perceptions in the OR. Findings are based on an intervention study that tested the effectiveness of an Anesthesia Pre-Induction Checklist (APIC) using a control group design. We introduced the APIC to provide a check and briefing of safety-critical items immediately before the induction of anesthesia. The key aims of the checklist are to avoid omission errors and to improve situation awareness by promoting a shared mental model among all members of the anesthesia team.
The APIC study featured a multimethod approach comprising (1) onsite systematic observations of anesthesia inductions and (2) surveys of the observed anesthesia team members conducted immediately after the onsite observations.
We compared data from teams who used the APIC (intervention group) during anesthesia induction with teams who did not use the APIC (control group). Specifically, we tested the effects of the APIC on communication and technical performance of anesthesia teams and team members’ awareness of critical information, perceptions of safety, and perceptions of teamwork. Ethics approval was given by the ethics committee of the Canton of Zurich (KEK StV-Nr. 07/12), Zurich, Switzerland.
We observed a total of 205 anesthesia inductions in seven OR areas at the University Hospital Zurich, Zurich, Switzerland. We observed 105 teams (including a total of 285 team members, ie, doctors and nurses) before, and 100 teams (272 team members) after the introduction of the APIC.
In the following section, we will outline (1) factors that led to the decision to use an electronic data collection tool, (2) requirements that our desired data collection tool needed to fulfill, (3) factors that led to the use of the iSurvey software specifically, (4) how we created the iPad- and iSurvey-based data collection tool, and (5) how we applied the iPad- and iSurvey-based tool during the APIC study.
The decision to use an electronic data collection tool was based on the following considerations. First, we planned to observe anesthesia inductions in seven different operating areas situated in multiple locations of an academic hospital. Second, the study required large numbers of observations and involved an extensive data collection protocol (more than 60 items per observation). Third, the anesthesia teams were observed during and surveyed immediately after the anesthesia induction, which required us to use a fast and unobtrusive way to assess data.
A paper-and-pencil–based data collection method would have required the multiple data collectors to handle and keep track of large amounts of paper. We reasoned that, compared to an electronic method, this would have made data collection, storage, and management more time- and energy-consuming and more prone to errors. We thus sought a simple and reliable electronic method to collect our data.
Before deciding on a specific survey app to be used during our research project, we defined some criteria that we considered important. As wireless Internet access could not be guaranteed in all positions inside the operating areas, we required an app that provided offline data collection. Moreover, the creation of surveys had to be easy and straightforward, without requiring software-programming skills. The software had to be ready for data entry within a couple of seconds, and if data collection was interrupted during an observation, the survey had to resume at the same position after pushing the start button. We also required branching logic, mandatory questions that prevent the survey from continuing until they have been answered, and the ability to group questions on a single screen, so that data collectors would not have to switch between survey screens, minimizing their cognitive effort. Also, the answers had to remain saved when going back and forth between survey screens. Finally, the app had to have a reasonable price to fit our research budget.
While planning the study in February 2012, before selecting a survey app, we downloaded and evaluated all survey apps that offered a free initial download on the US and Swiss Apple App Stores, using the search terms “survey” and “data collection,” in order to identify the app that best met our previously defined requirements. We evaluated the following apps: SurveyPocket by Jeremy Przasnyski (surveyanalytics site), Polldaddy by Automattic, Inc (polldaddy site), iFormBuilder by Zerion Software, Inc (iformbuilder site), and iSurvey (Harvestyourdata, Wellington, New Zealand). We also evaluated SurveyMonkey and Qualtrics, but these providers did not offer a solution that worked offline on an iPad. We decided to use iSurvey because it was the only one of these apps that saved answers when going back and forth between screens and allowed grouping of multiple answers on a single screen.
Once we had decided to use iSurvey as the software for our data collection tool, we created the survey containing all the data collection protocol questions for the study in the password-protected user area of the iSurvey website. The exact number of questions asked per observation varied because we used iSurvey’s branching logic function. Using this feature, the survey directs the user to a prespecified question or information screen depending on how a question is answered. Control questions were also included to verify that an observation met the predefined study inclusion criteria. For example, if the question “Is this an emergency situation?” was answered with “yes,” the survey was terminated because only anesthesia inductions for elective surgery, and not emergency procedures, were to be included. We also used iSurvey’s function to randomize the order of the answers to a question, in order to minimize common survey response biases such as the tendency to respond in the same direction on a series of questions regardless of content.
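To make the branching behavior concrete, the following Python sketch models this kind of question flow under our own simplified assumptions; the Question structure, screen names, and answer handling are hypothetical illustrations and not part of iSurvey itself.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Question:
    """One survey screen: its text, answer options, and branching rule."""
    text: str
    answers: list
    # Maps an answer to the next screen; None terminates the survey.
    branch: dict = field(default_factory=dict)
    randomize: bool = False  # shuffle answer order to reduce response bias

    def present(self):
        options = list(self.answers)
        if self.randomize:
            random.shuffle(options)
        return options

# Control question enforcing the study inclusion criteria: an emergency
# induction terminates the survey immediately.
emergency_check = Question(
    text="Is this an emergency situation?",
    answers=["yes", "no"],
    branch={"yes": None, "no": "general_questions"},
)

answer = "yes"  # as entered by the data collector
if emergency_check.branch[answer] is None:
    print("Survey terminated: only elective inductions are included.")
```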
We recruited 5 attending anesthesiologists (ie, each with more than 5 years of clinical anesthesia experience) to serve as expert raters of team performance and as data collectors for the survey of team member perceptions. Prior to the observations in the ORs, we conducted a training session. This session served to (1) explain the study procedure, (2) familiarize the expert raters with the data collection tool, (3) train the raters in observational skills, and (4) test the interrater reliability of the data collection tool. We explained to the expert raters how to start the iPad and iSurvey and how to upload data after an observation. We also conducted a rating of a videotaped anesthesia induction scenario together with the expert raters. To assess interrater agreement, we recorded three multi-angle videos of anesthesia induction scenarios showing different levels of team performance in a full-scale anesthesia simulator. All 5 expert raters then independently watched and rated the videos using the data collection tool, and Fleiss’ kappa was calculated.
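As an illustration of the interrater reliability analysis, Fleiss’ kappa can be computed, for example, with the statsmodels Python package; the ratings below are invented placeholder values, not study data.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Placeholder yes/no ratings (0 = no, 1 = yes): one row per rated item
# across the three scenario videos, one column per expert rater (n = 5).
ratings = np.array([
    [1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [1, 1, 1, 1, 1],
])

# aggregate_raters converts raw ratings into an items x categories count table.
table, _ = aggregate_raters(ratings)
print(f"Fleiss' kappa: {fleiss_kappa(table, method='fleiss'):.2f}")
```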
During the data collection phase of the study, we conducted three meetings with all expert raters to address questions pertaining to the iPad- and iSurvey-based data collection tool. During the study, the expert raters answered general questions and questions about team performance (ie, communication and clinical performance). For example, a general question was “Which team member read the checklist? Consultant/resident/nurse.” An example of a team performance question was “Did the team talk about the patient allergies? Yes/no.”
The expert raters completed nominal scale-level questions (yes/no and different-choice questions, both multiple-choice and single best answer). For example, a different-choice, single-best-answer question was: “Name of the OR area the observation is taking place in: OR area #1, OR area #2, etc.”
After the observation, the expert raters handed the iPad to each observed team member, who then individually and privately answered a short survey. The individual team members answered general questions and questions about their perceptions during the induction. For example, a general question was “My anesthesia experience in years? <1, 1-5, 5-10, >10 years,” and an example of a team member perception question was “How safe did I feel during this induction?” answered on a continuous Likert-type rating scale from 0% (very unsafe) to 100% (very safe). The anesthesia team members completed nominal scale-level questions (different choices, both multiple-choice and single best answer) and interval scale-level questions (continuous rating scales).
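A continuous rating item of this kind can be represented as a simple bounded slider; the sketch below is our own hypothetical illustration of such an item, not iSurvey’s internal representation.

```python
from dataclasses import dataclass

@dataclass
class ContinuousRatingItem:
    """A Likert-type rating item answered on a 0%-100% slider."""
    text: str
    low_anchor: str   # label shown at 0%
    high_anchor: str  # label shown at 100%

    def record(self, value: float) -> float:
        # Clamp the slider position to the valid 0-100 range.
        return max(0.0, min(100.0, value))

safety_item = ContinuousRatingItem(
    text="How safe did I feel during this induction?",
    low_anchor="very unsafe (0%)",
    high_anchor="very safe (100%)",
)
print(safety_item.record(87.5))  # stores the team member's rating
```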
The observation and team member survey procedure for anesthesia teams in both the APIC group and the control group did not differ, and teams in both groups were observed and surveyed equally by the same expert raters. These expert raters did not participate in the anesthesia inductions they observed or in the team member surveys after the inductions. Their sole purpose was to rate the anesthesia induction and administer the team member survey to the observed team members after the induction.
The collected data were downloaded from the password-protected user area of the iSurvey website as a comma-separated values (CSV) file readable by MS Excel for analysis. Figure 1 shows screenshots of the data collection tool used in this study. Figure 2 shows an example of how the data collection was conducted in the ORs.
Figure 1. Screenshots of the iPad- and iSurvey-based data collection tool. The left panel shows the tool asking the data collector to name the operating area in which the observation is taking place. The right panel shows example questions asked during the team member survey.
Figure 2. A data collector using the iPad- and iSurvey-based data collection tool to rate anesthesia team performance during a systematic onsite observation of team performance.
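Loading the exported CSV file for analysis is then straightforward; below is a minimal sketch using pandas, with a hypothetical file name.

```python
import pandas as pd

# Hypothetical file name; iSurvey exports responses as a CSV file
# that can also be opened in MS Excel.
data = pd.read_csv("apic_observations.csv")

print(data.shape)             # observations x data collection items
print(data.columns.tolist())  # the survey questions as column names
```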