Experimental setups and procedures
This protocol is extracted from the research article:
Integrating vision and echolocation for navigation and perception in bats
Sci Adv, Jun 26, 2019; DOI: 10.1126/sciadv.aaw6503

Bimodal learning. The experiment took place in an acoustic flight room (4 m by 2.5 m by 2.2 m). Twelve ultrasonic microphones (Knowles FG) were used to record echolocation and were spread around the perimeter of the room. Audio was sampled and recorded using a 12-channel analog-to-digital (A/D) converter (UltraSoundGate 1216, Avisoft) with a sampling rate of 250,000 Hz. In addition, two infrared cameras were placed in the room to allow video recording of the bats’ behavior.

Rousettus bats were trained in a two-alternative forced choice task to discriminate between two wooden 3D targets differing in shape: a triangular prism (base, 24 cm; height, 22 cm; length, 16 cm) and a cylinder (diameter, 17 cm; length, 16 cm). From the bats’ takeoff platform, the targets appeared to the bats as a triangle and a circle of equal area (i.e., their 2D cross sections facing the bat; see inset in Fig. 1A). The targets’ 3D shapes differed greatly and thus provided ample acoustic and visual cues for classification. Two bats were trained to land on the prism, and two were trained to land on the cylinder. Another bat that was trained to land on the prism was disqualified because it did not echolocate. The bats were placed by the experimenter on a starting platform 2.8 m from the targets, from which they initiated their flight toward the targets. They were allowed to fly freely and scan the 3D shapes of the targets from all angles. If, during this process, a bat landed on one of the walls rather than on a target, it was returned by the experimenter to the starting platform. Correct choices (i.e., landing on the correct target) were rewarded with fruit puree given from a syringe controlled by the experimenter. Wrong choices were punished with an aversive noise. After landing, the experimenter removed the bat from the target and changed the location of the targets (while they were not visible to the bat; see below). The experimenter then placed the bat on the platform facing away from the targets and moved to the corner of the room in a stereotypical manner to avoid providing any cue to the bats. The bats typically took off immediately. The fact that approximately half of the bats did not succeed in the sensory translation test (cross-modal recognition), and that the bats performed at chance level under all control conditions throughout the experiments, implies that they were not cued by the experimenter.
The bats were trained in this manner 3 days a week, with each session lasting approximately 30 min (including ~26 to 30 trials).

The targets were mounted on two poles 1 m apart. These poles were part of an apparatus that rotated around a fixed axis, which allowed the experimenter to easily switch the two targets between two possible fixed locations (see inset in Fig. 1A). To prevent the use of spatial memory, the targets switched locations in a pseudorandom order, with each target appearing in the same location in no more than three consecutive trials. Occasionally during training, a bat nevertheless fixated on one location for multiple trials; in these cases, the experimenter placed the correct target in the opposite location for several trials until the bat chose it. The apparatus was rotated after every trial, regardless of whether the targets changed location, to prevent the learning of auditory cues that might reveal the location of the correct target.
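The switching rule above (pseudorandom sides, with no more than three consecutive trials in the same location) can be sketched as a small helper. This is a hypothetical Python illustration, not the authors' procedure; the `max_run` parameter and the "L"/"R" labels are assumptions.

```python
import random

def pseudorandom_sides(n_trials, max_run=3, seed=0):
    """Draw a left/right sequence in which neither side repeats
    more than max_run times in a row. Note that this constrains
    runs only; it does not balance the total counts per side."""
    rng = random.Random(seed)
    sides = []
    for _ in range(n_trials):
        options = ["L", "R"]
        # if the last max_run entries are identical, forbid that side
        if len(sides) >= max_run and len(set(sides[-max_run:])) == 1:
            options.remove(sides[-1])
        sides.append(rng.choice(options))
    return sides
```

Any generator with this property would serve; the essential constraint is only that a run of four or more trials on the same side never occurs.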

The bats were first trained to discriminate the targets with both modalities available. Lights were turned on (approximately 2 lux) to allow vision. The bats were trained until they reached a criterion of three consecutive days with 75% correct choices.

To test which modality the bats used, we abolished each modality separately. We first abolished vision (echolocation-only training) and then echolocation (vision-only training). Last, we tested the bats with both modalities abolished. Since the bats did not spontaneously succeed when both modalities were abolished (Fig. 1D), they must have been using at least one of these modalities for learning (Fig. 1A). The bats’ inability to perform with echolocation only (Fig. 1B), compared with their immediate success with vision only (Fig. 1C), implies that there was no order effect for the conditions tested.

All light sources in the room were eliminated, and the bats were trained in complete darkness, permitting the use of echolocation only. In these training sessions, the experimenter used night vision goggles.

The training occurred in dim light (approximately 2 lux) with echolocation blocked. To abolish the use of echolocation, the wooden targets were replaced with targets of the same shape, size, and color but made of foam. Foam is much less acoustically reflective than wood, thus reducing the possibility of using echoes. To ensure that even the weak echoes reflected from the foam were not used by the bats, pink noise was played to mask the echoes. The noise was played from two speakers (Vifa) placed 0.6 m behind and 0.4 m below each target, facing the direction from which the bats approached the targets. The speakers were connected to an UltraSoundGate Player 116 device (Avisoft). The noise was measured to be 90 dB at 1 m at 30 kHz, the frequency of peak intensity of Rousettus echolocation (21).

In both the vision-only and echolocation-only conditions, the bats were trained until they reached the criterion of three consecutive sessions with 75% success, or for 10 sessions (the maximum number of sessions needed to first reach 75% in the bimodal training), whichever came first. We considered these two conditions training because the bats were rewarded for correct decisions.
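The stopping rule (three consecutive sessions at or above 75%, capped at 10 sessions) can be expressed compactly. The helper below is an illustrative Python sketch with a hypothetical name, not the authors' code:

```python
def reached_criterion(success_rates, threshold=0.75, window=3):
    """Return the 1-based index of the session at which a bat first
    completed `window` consecutive sessions at or above `threshold`,
    or None if it never did."""
    streak = 0
    for session, rate in enumerate(success_rates, start=1):
        streak = streak + 1 if rate >= threshold else 0
        if streak == window:
            return session
    return None

reached_criterion([0.6, 0.8, 0.76, 0.9])  # -> 4
```

Training then stops either at the criterion session or at session 10, whichever comes first.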

The bats were trained in complete darkness with the foam targets and noise playback. The main purpose of this condition was to ensure that the bats did not use any olfactory cues from the foam targets. If the bats relied only on acoustic or visual cues from the targets, then their performance should drop to chance level under this condition.

The targets were ensonified using a speaker (Vifa) connected to an UltraSoundGate Player 116 device (Avisoft), with a 46DD-FV 1/8″ constant current power calibrated microphone (GRAS) placed on top of the speaker. The speaker played a 2.5-ms-long downward frequency sweep from 95 to 15 kHz, and the microphone recorded the echo’s sound pressure. The sampling rate for both playback and recording was 375 kHz. The microphone and speaker were placed on a tripod 1 m from the target, which was also placed on a tripod.
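A playback signal of this kind can be reconstructed as follows. The sweep shape is not stated in the text, so linearity is an assumption; this is an illustrative Python/NumPy sketch, not the authors' stimulus file:

```python
import numpy as np

fs = 375_000                   # Hz, sampling rate of playback and recording
dur = 2.5e-3                   # s, sweep duration
f0, f1 = 95_000.0, 15_000.0    # Hz, start and end of the down sweep
t = np.arange(int(fs * dur)) / fs
# instantaneous phase of a linear frequency sweep from f0 to f1
phase = 2 * np.pi * (f0 * t + (f1 - f0) / (2 * dur) * t ** 2)
sweep = np.cos(phase)
```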

We used in-house software written in MATLAB (MathWorks, 2015) to analyze the audio recordings. Using the time differences of arrival of the echolocation pulses at the different microphones in the array, we reconstructed the bats’ 3D flight trajectories under the bimodal condition. We then calculated the angle between the bat and each target’s main axis at every location. We analyzed data from two bats (I and B) from the first two learning sessions of the bimodal training, using correct choice trials only. In total, seven trials were analyzed per bat: three with the rewarded target on the left and four with the rewarded target on the right.
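The in-house MATLAB code is not described in detail, but the core TDOA localization step is a standard nonlinear least-squares problem. The Python sketch below is illustrative only; the microphone coordinates, starting guess, and speed of sound are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s, assumed room value

def locate_call(mic_xyz, tdoa, x0=(2.0, 1.25, 1.1)):
    """Estimate the 3D position of an echolocation pulse from its
    time differences of arrival (relative to microphone 0) at an
    array of microphones, via nonlinear least squares."""
    mic_xyz = np.asarray(mic_xyz, dtype=float)
    tdoa = np.asarray(tdoa, dtype=float)

    def residuals(p):
        dist = np.linalg.norm(mic_xyz - p, axis=1)
        # range differences predicted by p vs. measured from the TDOAs
        return (dist[1:] - dist[0]) - SPEED_OF_SOUND * tdoa

    return least_squares(residuals, x0).x
```

Localizing each pulse in turn yields the flight trajectory, from which the bat-to-target angle can then be computed.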

We also analyzed echolocation rates for all bats in both the bimodal and echolocation-only conditions. The analysis was conducted on 10 trials per bat from the first two sessions of each condition, using the loudest channel.

Cross-modal recognition. The cross-modal recognition experiment was conducted twice with different bats (Table 1) and slightly different targets, in a flight room similar to the one used in the bimodal learning experiment, equipped with the same ultrasonic microphones and infrared cameras (Fig. 2B). In both rounds, Rousettus bats were trained in a two-alternative forced choice task to discriminate between a textured and a smooth target. They were first trained and tested in complete darkness (<10⁻⁷ lux, i.e., using only echolocation). Then, they were tested in dim light (under conditions where only vision could be used; see below) to examine cross-modal recognition. Last, they were tested without visual or acoustic cues to rule out the use of alternative cues.

In the first round, the bats were presented with two identical plastic targets (15 cm by 10 cm by 15 cm) that differed only in texture; one of the targets was smooth, and the other one was perforated with 1-cm-deep holes on four of its sides and 5-cm-deep holes on two opposite sides (see inset in Fig. 2A). The bats had to land on the smooth target to receive fruit puree presented in a small 5-cm-diameter bowl. To control for olfactory cues, both targets had bowls with fruit on their upper face (where the bats landed). Both bowls were covered with a fine mesh made of fishing wires (0.5-mm diameter). The feeder on the smooth target had wide openings of 1.5 cm between wires, allowing the bats access to the food, while the feeder on the perforated target had narrower openings of 0.5 cm that prevented any access. Food was frequently replenished by the experimenter to equalize odor cues. We confirmed that the bats could not recognize the targets based on this difference between the meshes on the bowls (see below). The targets were mounted on poles at the center of the room in two fixed locations. The two targets’ positions were switched in a pseudorandom order (as in “Bimodal learning” under the “Experimental setups and procedures” section). The targets were always removed from the poles and placed on them again, regardless of whether their location changed, to eliminate any acoustic cues.

The bats were trained in complete darkness daily, 5 days a week, with each session lasting 30 min. The bats took off from one of the corners of the room, which they had established as their home base. Flights were initiated by the bats. After a bat had landed on one of the targets, the experimenter encouraged it to fly back to the wall (by gently touching it) so that the location of the targets could be changed. Night vision goggles were used by the experimenter throughout the experiment. We took extra care that the bats never saw the targets; the targets were uncovered each day only after ensuring that the room was completely dark.

Test trials began once the bats reached a criterion of 75% correct choices on three consecutive days. All three types of test trials (see below) differed from training trials in that they had no reward to prevent learning (that is, giving reward in light trials would have resulted in bats relearning the task visually, instead of “translating” the information they gained with echolocation). To ensure that bats continued to land on the targets even in the absence of a food reward, test trials were interspersed between regular rewarded training trials, randomly separated by one to three training trials. During test trials, both feeders were blocked with the same dense mesh, preventing access to the food in both of them.

These tests were performed in complete darkness to validate the learning. A total of 42 to 49 of these trials were performed per bat.

After the bats finished the test trials in the dark, they were tested in dim light (approximately 2 lux). These trials were embedded within regular (dark) training sessions, randomly interspersed among the training trials, and no food reward was given. In these test trials, the targets were placed inside plastic cubes to allow the use of vision but not echolocation (the echoes of the two cube-covered targets were identical, as validated; see control below). At the beginning of each visual test trial, the targets were placed inside the plastic cubes, which were removed after the bat landed. Twenty trials per bat were performed (and no more) to prevent extinction of the original learning.

The bats were also tested in the dark with the plastic cubes covering the targets. This allowed us to examine whether they relied on any cue other than the texture difference (e.g., olfactory cues or acoustic cues from the feeders). If only texture-related acoustic information was used, then the bats were expected to perform at chance level under this control condition. Twenty of these trials were performed per bat.

All comparisons were performed with a one-tailed binomial test relative to chance level (50% success). Tests were one-sided because of our assumption that training would improve performance.
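In Python, such a test corresponds to `scipy.stats.binomtest` with `alternative="greater"`. The trial counts below are illustrative, not the paper's data:

```python
from scipy.stats import binomtest

# e.g., 38 correct choices out of 49 unrewarded test trials (made-up numbers)
result = binomtest(38, n=49, p=0.5, alternative="greater")
print(result.pvalue)  # probability of >= 38 successes under chance
```

The one-sided alternative encodes the directional hypothesis that trained bats perform above, not merely differently from, chance.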

See “Bimodal learning” under the “Experimental setups and procedures” section. In addition to recording from an azimuth of 0°, the targets were ensonified from 22.5° and 45° to test the influence of the holes on the spectra. Echoes recorded at the same angle were averaged.

In the second round of the experiment, the same room as in the bimodal learning experiment was used. The bats were presented with slightly different targets (15 cm by 15 cm by 15 cm) than in the first round: the perforated target had only the two 5-cm-deep holes on two parallel faces (without the 1-cm holes; see inset in Fig. 2B), which might have made the translation to vision more difficult. The second target was smooth, as in the first round. The targets were changed because this experiment was part of a more comprehensive experiment (whose results will be published elsewhere) aiming to assess depth sensitivity in Rousettus. For the same reason, the locations of the takeoff point and the targets were changed (see Fig. 2, A and B) to ensure that the bats took off equidistant from the two targets. In addition, to ensure that the bats did not have a bias toward the smooth target (since they had all been trained to land on it in the previous round), in this round, two of the bats were rewarded for flying to the smooth target, and three were rewarded for flying to the textured target. As in the bimodal learning experiment, the bats were released from the experimenter’s hand onto a starting platform from which they initiated their flights. Food reward was given by the experimenter, who then returned the bat to the starting platform for a new trial. Cross-modal recognition was tested in the same manner as in round 1: training in complete darkness, testing in the dark, testing in the light, and control (30 trials per condition).

Sensory weighing. The experiment took place in an acoustic room (4 m by 2.2 m by 2.4 m). A large two-arm maze (1.8 m by 3 m by 1.8 m), which allowed bats to fly, was set up in the middle of the room (Fig. 3A). The maze’s walls and ceiling were made of white tarpaulin, which strongly reflects sound and does not allow the bats to land on it. A landing platform made of foam (70 cm by 45 cm) was hung at the end of each arm.

One of the arms was blocked with a wall, which we manipulated to control the visual and acoustic cues the bats were receiving. (i) We manipulated the color of the highly acoustically reflective plastic wall, testing white versus black walls (which were identical except for color). This altered the visual cues only while maintaining the same acoustic information (both walls were equally reflective). The bats were tested at three different light levels (see below). (ii) We manipulated the acoustic reflectivity of the blocking wall (using foam instead of plastic) while keeping its color identical (black). This manipulated only the acoustic information while maintaining the visual cues. The bats were tested under this condition only at the lowest light level (3 × 10⁻⁵ lux) because we expected echolocation to be more dominant at this light level. Having the bats choose between a blocked arm whose sensory properties varied (e.g., white versus black walls) and an open arm that never changed allowed us to reveal which sensory cues they relied on when making their decision.

Two high-speed infrared cameras (OptiTrack, NaturalPoint) were placed 0.5 m above ground at the entrance of each arm (1.4 m into the maze) facing up and recorded at 125 frames/s. An ultrasonic microphone (UltraSoundGate CM16/CMPA, Avisoft) was placed on a tripod in front of the wall separating the two arms of the maze 1.35 m above the ground. The microphone was facing the main corridor and was tilted upward by 45°. The microphone was connected to an A/D converter (Hm116, Avisoft) and recorded audio at a sampling rate of 250,000 Hz.

We tested naïve bats in this experiment; each bat performed a single flight, without any training. Each bat was kept in a carrying cage for 15 min in the acoustic room, outside the maze, to allow its eyes to adapt to the dark. Then, the experimenter released the bat from the hand while seated on a chair at the maze entrance (1.50 m above ground). The bat was encouraged to fly if it stayed on the hand longer than a few seconds. If it did not fly toward one of the two arms (but hovered or turned back) in three attempts, the bat was disqualified. We also disqualified bats that did not echolocate and trials in which no video was recorded (Table 1). The microphone and cameras were triggered at the moment of release by another experimenter sitting outside the experimental room and were set to record until the bat entered one of the arms. Light level was adjusted to 5 × 10⁻², 2 × 10⁻³, or 3 × 10⁻⁵ lux using four LED light sources at the ceiling of the acoustic room outside the maze, which allowed homogeneous lighting (above the tarpaulin ceiling). The 3 × 10⁻⁵ lux level was chosen because it is very close to these bats’ visual threshold (17); at this light level, the black walls blocking one of the arms visually resembled the opening of a cave. The other light levels allowed more visual information.

When manipulating the color of the wall, we presented the two colored walls on both sides of the maze at all light levels to ensure that neither the maze itself nor its lighting affected the bats’ behavior. We found no difference (choice of open versus blocked arm between the two sides of the maze with the black wall, P = 1; same comparison for the white wall, P = 0.59; behavior after entering the blocked arm between the two sides with the black wall, P = 0.1; Fisher’s exact test for all comparisons; see “Statistical analysis”). Because there was no baseline preference for either arm and because each bat flew only once, the trials with the nonreflective wall were performed on the left side only, for convenience.

To ensure that R. aegyptiacus were capable of detecting the reflective wall acoustically (i.e., based on echolocation) in the experimental setup before choosing an arm to fly into, 12 bats were trained in complete darkness (<10⁻⁶ lux, below their visual threshold) to acoustically detect the open arm, rather than the blocked one, and fly into it. During training, bats were released from the base of the two-arm maze, and correct choices (i.e., flying into the open arm) were rewarded by allowing the bat to hang on the landing platform at the edge of the arm for 2 min (wild bats prefer this over being handled). Bats that agreed to drink mango juice from a syringe were also rewarded with juice. Bats that reached a criterion of 10 consecutive correct choices within 3 hours of training were tested in complete darkness (10⁻⁷ lux). The tests comprised 20 trials per bat, with the blocking wall positioned on either side in a pseudorandom order, with no more than three consecutive trials on the same side.

Audio recordings were examined using SASLab (Avisoft) to ensure that the bats echolocated. All videos were analyzed by an experimenter and categorized into trials in which the bat entered the blocked arm and trials in which the bat entered the open arm. Trials in which bats entered the blocked arm were further subdivided into trials in which the bat approached the wall but then turned back, trials in which the bat attempted to land, and trials in which the bat collided. These trials were classified by three independent observers, and the observers’ scores (e.g., proportion of bats colliding per condition) were averaged. Bats whose video was not obtained because of technical failure, or whose recordings did not show echolocation, were disqualified because we aimed to study multimodal decision making (Table 1).

To calculate echolocation rate, we used trials in which bats collided with, or attempted to land on, the wall blocking the arm. We used the moment of contact with the wall to synchronize the video and audio of each trial. In total, we had 19 trials for the reflective wall and 41 for the nonreflective wall. Each trial was divided into two halves by flight time, and the echolocation rate was calculated for each half and then averaged across bats. The echolocation rates are thus estimates for when the bat was farther from versus closer to the wall.
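The per-half rate computation can be sketched as follows. This is an illustrative Python helper with a hypothetical name; the pulse times in the usage example are made up:

```python
import numpy as np

def rates_by_half(pulse_times, t_start, t_contact):
    """Split a flight into two equal time halves and return the
    echolocation rate (pulses/s) in each half; t_contact is the
    moment of contact with the wall used to sync audio and video."""
    pulse_times = np.asarray(pulse_times, dtype=float)
    mid = (t_start + t_contact) / 2.0
    half_dur = (t_contact - t_start) / 2.0
    first = np.count_nonzero((pulse_times >= t_start) & (pulse_times < mid))
    second = np.count_nonzero((pulse_times >= mid) & (pulse_times <= t_contact))
    return first / half_dur, second / half_dur

rates_by_half([0.1, 0.3, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], 0.0, 1.0)  # -> (4.0, 12.0)
```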

We first tested the preference for the blocked versus the open arm with a binomial test relative to chance. We then tested whether the bats’ preference for the open versus the blocked arm differed among conditions. The data were compared using the chi-square test for independence unless >20% of the table cells had expected values of <5; in those cases, the data were compared with Fisher’s exact test. Last, we tested whether the behavior of the bats after entering the arm (i.e., turn back, attempt to land, or collide) differed between conditions (reflective versus nonreflective walls), again using chi-square and Fisher’s tests.
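The chi-square-or-Fisher decision rule described above can be sketched for 2×2 tables. The Python helper and example tables below are illustrative, not the paper's data:

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def compare_conditions(table):
    """Chi-square test for independence, unless >20% of cells have
    expected counts below 5, in which case Fisher's exact test is
    used instead (sketch for 2x2 contingency tables)."""
    table = np.asarray(table)
    # expected counts under independence: row total * col total / grand total
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    if np.mean(expected < 5) > 0.2:
        _, p = fisher_exact(table)
        return "fisher", p
    _, p, _, _ = chi2_contingency(table)
    return "chi2", p
```

For larger tables (e.g., the three-way behavior classification), the same expected-count criterion applies, with a generalized exact test replacing the 2×2 Fisher test.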

The targets were ensonified in the same manner as in the bimodal learning experiment. After ensonification of the targets, the microphone was placed in the position of the target to record the speaker’s incident sound pressure. We then calculated the target strength of each target from the ratio of its peak echo sound pressure to the peak incident sound pressure.
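Expressed in decibels, this ratio gives the target strength. The sketch below assumes the conventional 20·log10 form for a pressure-amplitude ratio (the text does not state the exact formula); the numbers in the example are illustrative:

```python
import math

def target_strength_db(echo_peak, incident_peak):
    """Target strength in dB from peak echo and incident sound
    pressures measured at the same distance from the target."""
    return 20.0 * math.log10(echo_peak / incident_peak)

target_strength_db(0.01, 1.0)  # -> -40.0 dB
```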
