Semi-automatic segmentation: mEMbrain

Flavie Bidel
Yaron Meirovitch
Richard Lee Schalek
Xiaotang Lu
Elisa Catherine Pavarino
Fuming Yang
Adi Peleg
Yuelong Wu
Tal Shomrat
Daniel Raimund Berger
Adi Shaked
Jeff William Lichtman
Binyamin Hochner
Albert Cardona

To segment the aligned image stack (892 sections, total volume 2.7 million μm³; Figure 1B), we applied a machine learning-based automated reconstruction pipeline to the volume (Lee et al., 2017; Meirovitch et al., 2019). As input, this pipeline required fully annotated image stacks from four manually selected volumes (ranging in volume from 21.6 to 153 μm³; Figure 1—figure supplement 2), which served as ground truth for training the artificial neural network. The ground-truth annotation labeled two categories: intracellular space excluding cellular membranes (category 1), or any combination of cellular membrane and extracellular space (category 2). We used the mEMbrain software package, implemented in MATLAB (Pavarino et al., 2023), to obtain a first template of the cellular processes. This computation included training a deep neural network (U-Net; Ronneberger et al., 2015) to classify pixels into the two categories (accuracy above 93%) and producing a 2-dimensional instance segmentation of the individual cross-sections of all cellular compartments using watersheds (Pavarino et al., 2023).
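
For illustration only: mEMbrain itself is a MATLAB package, and the sketch below is not its implementation. It shows, in Python with scikit-image, one common way to turn a membrane-probability map from a pixel classifier into a 2-D instance segmentation with seeded watersheds. The function name, thresholds, and seed-spacing parameter are illustrative assumptions, not values from the study.

```python
# Minimal sketch (assumed parameters, not the mEMbrain MATLAB code):
# 2-D instance segmentation of one EM section from a membrane-probability map,
# using a watershed seeded at regions far from any predicted membrane.
from scipy import ndimage as ndi
from skimage.measure import label
from skimage.segmentation import watershed

def segment_section(membrane_prob, membrane_thresh=0.5, min_seed_dist=5.0):
    """membrane_prob: 2-D array in [0, 1], the classifier's probability of
    category 2 (membrane / extracellular space). Thresholds are illustrative."""
    # Binary mask of intracellular space (category 1 = low membrane probability).
    intracellular = membrane_prob < membrane_thresh
    # Distance from each intracellular pixel to the nearest predicted membrane.
    dist = ndi.distance_transform_edt(intracellular)
    # One seed per connected "core" region lying well away from membranes.
    seeds = label(dist > min_seed_dist)
    # Flood the inverted distance map from the seeds, restricted to cytoplasm.
    instances = watershed(-dist, markers=seeds, mask=intracellular)
    return instances  # 2-D label image: one integer id per cross-section
```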

We used custom code to merge colored neuronal cross-sections across slices of the image stack if adjacent cross-sections matched in shape or if adjacent processes shared sufficient overlapping cytoplasm far from any cellular membrane (see the sketch after this paragraph). This conservative merging yielded 3-dimensional object instances that rarely spanned two distinct neuronal or glial processes, but it also often split complete neuronal processes into several segmented objects. In most cases, however, the 2-D cross-sections were fully and correctly segmented into a single object, and the conservative agglomeration procedure based on shape matching correctly agglomerated these objects over a few consecutive sections. We also applied 3-D agglomeration procedures, including the mean affinity agglomeration suggested by Lee et al., 2017 and Meirovitch et al., 2019. However, owing to membrane breaks, these procedures often produced merge errors that were hard to proofread manually; this agglomeration was therefore not included in the circuit analysis. We found that the merge errors were associated with specific cellular and compartment types, including the shafts of the large neurons and many synaptic boutons. Because such preparation artifacts severely affected the quality of our fully automated reconstruction, we combined the automated segmentation with the manual methods described below.
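
The authors' custom merging code is not shown in this protocol; the sketch below is only an assumed illustration of the conservative cross-section linking criterion described above: two cross-sections in consecutive sections are linked either when their shapes match (here approximated by intersection-over-union) or when their overlapping cytoplasm lies far from any membrane. All thresholds and names are hypothetical.

```python
# Minimal sketch (assumed logic and thresholds, not the authors' custom code):
# conservative linking of 2-D instances between two consecutive sections.
import numpy as np

def conservative_links(labels_a, labels_b, dist_to_membrane_b,
                       min_iou=0.7, min_core_area=50, core_dist=4.0):
    """labels_a, labels_b: 2-D label images of consecutive sections (0 = background).
    dist_to_membrane_b: distance of each pixel in section b to the nearest membrane."""
    links = []
    for id_a in np.unique(labels_a)[1:]:           # skip background label 0
        mask_a = labels_a == id_a
        overlap_ids = np.unique(labels_b[mask_a])  # instances in b touched by id_a
        for id_b in overlap_ids[overlap_ids > 0]:
            mask_b = labels_b == id_b
            overlap = mask_a & mask_b
            iou = overlap.sum() / (mask_a | mask_b).sum()   # shape-match proxy
            # Overlapping cytoplasm that sits well away from any membrane pixel.
            core = overlap & (dist_to_membrane_b > core_dist)
            if iou >= min_iou or core.sum() >= min_core_area:
                links.append((id_a, id_b))         # merge these instances in 3-D
    return links
```

Keeping both criteria strict (high IoU, large membrane-distant overlap) is what makes the agglomeration conservative: it favors split errors, which are easier to proofread, over merge errors.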
