Carl Zeiss image files were imported into Imaris (v8.4 with FilamentTracer; Bitplane, Zurich, Switzerland) for quantitative analysis. In Imaris, Gephyrin.FingR puncta were rendered as 3D objects across the entire field of view (FOV) using an experimenter-set, manual background-subtraction threshold to conservatively estimate puncta borders from the EGFP signal, with the following settings: 0.5 µm estimated diameter; split-touching-objects setting with the same 0.5 µm estimated diameter; and all “quality”-filter-identified puncta included, with a 6-voxel minimum size cut-off. EGFP-labeled nuclei were digitally subtracted from the Gephyrin.FingR puncta renderings. The small fraction of puncta not captured by these settings were generated manually by the experimenter while examining each optical section for concordance between the fluorescence signal and the 3D renderings.

PV-Syn boutons were rendered using the following settings: 0.6 µm estimated diameter; split-touching-objects setting with a 1 µm estimated diameter; and all “quality”-filter-identified puncta included, with a 1 µm² minimum surface-area cut-off. Rather than rendering PV-Syn boutons for the entire FOV, presynaptic bouton renderings were generated for a sub-volume region of interest (ROI) just large enough to encompass the target neuron’s soma and the surrounding PV-Syn boutons.

Individual Gephyrin.FingR puncta and PV-Syn bouton renderings at each target neuron’s soma surface were identified using a combination of distance from the cell’s nucleus and manual selection by the experimenter. A Gephyrin.FingR punctum and a PV-Syn bouton were scored as digitally aligned if the edges of the rendered objects were less than 0.15 µm apart. This distance is slightly below the diffraction limit of our confocal images, accounts for small gaps between pre- and postsynaptic rendered objects arising from the conservative border estimations, and is consistent with previous reports using fluorescent object localization [3,112].
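The edge-to-edge apposition criterion described above can be sketched computationally. The snippet below is a minimal illustration, not the method actually used (the analysis was performed on Imaris surface renderings): it assumes two binary 3D voxel masks and a known voxel size, and the function name, toy volumes, and 0.05 µm voxel size are hypothetical choices for the example.

```python
import numpy as np
from scipy import ndimage

def min_edge_distance_um(mask_a, mask_b, voxel_size_um):
    """Minimum edge-to-edge distance (in µm) between two binary 3D object
    masks; returns 0.0 if the objects touch or overlap."""
    if np.logical_and(mask_a, mask_b).any():
        return 0.0
    # Distance from every voxel to the nearest voxel of mask_a, in physical
    # units; `sampling` handles anisotropic voxel dimensions (z, y, x).
    dist_to_a = ndimage.distance_transform_edt(~mask_a, sampling=voxel_size_um)
    return float(dist_to_a[mask_b].min())

# Toy example: two single-voxel objects, 2 voxels apart along the last axis,
# with isotropic 0.05 µm voxels -> 0.10 µm edge-to-edge distance.
shape = (5, 5, 10)
a = np.zeros(shape, dtype=bool); a[2, 2, 2] = True
b = np.zeros(shape, dtype=bool); b[2, 2, 4] = True
d = min_edge_distance_um(a, b, voxel_size_um=(0.05, 0.05, 0.05))
aligned = d < 0.15  # the 0.15 µm apposition criterion from the protocol
print(round(d, 3), aligned)
```

In practice the pre- and postsynaptic masks would come from the rendered surface objects; the key point the sketch captures is that the 0.15 µm cut-off is applied to the gap between object borders, not between object centers.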