To compare the segmentation performance obtained with our automatically annotated and our manually labelled training datasets, we used StarDist [25], a convolutional neural network for segmenting star-convex objects such as cell nuclei. The network is based on U-Net [5], a well-established neural network used for high-performance cell segmentation and detection tasks, as shown in the Cell Tracking Challenge at ISBI 2015 and the 2018 Data Science Bowl [5,6]. No pre-trained models were used.

To evaluate nuclei segmentation performance, we identified object-level errors. Instance segmentation results obtained with our annotation method are compared with the ground truth of the equivalent dataset to compute the intersection over union (IoU) of all nuclei, from which true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN) are derived. A minimum IoU threshold t = 0.5 (50% overlap with the ground truth) was selected to identify correctly segmented objects; any predicted segmentation mask below this threshold was considered an error. From the TP, FP, TN and FN counts, a confusion matrix can be determined. The test accuracy metrics precision (P(t), positive predictive value), recall (R(t), sensitivity) and F1 score (F1(t), the harmonic mean of precision and recall) are computed as follows:

$$P(t) = \frac{TP(t)}{TP(t) + FP(t)}, \quad R(t) = \frac{TP(t)}{TP(t) + FN(t)}, \quad F1(t) = \frac{2 \cdot P(t) \cdot R(t)}{P(t) + R(t)}$$
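The matching utility bundled with StarDist implements exactly this kind of IoU-based object matching. The following is a minimal sketch, assuming y_true and y_pred are 2D integer label images of the ground-truth and predicted nuclei; the file names are illustrative placeholders, not part of the original protocol.

```python
import numpy as np
from stardist.matching import matching

# y_true, y_pred: 2D integer label images (0 = background, 1..N = nuclei).
# Hypothetical file names; in practice these would come from the annotated
# dataset and the trained network's prediction, respectively.
y_true = np.load("ground_truth_labels.npy")
y_pred = np.load("predicted_labels.npy")

# Match predicted objects to ground truth at the minimum IoU threshold
# t = 0.5; predictions below the threshold count as errors (FP or FN).
stats = matching(y_true, y_pred, thresh=0.5, criterion="iou")
print(stats.tp, stats.fp, stats.fn)            # confusion-matrix counts
print(stats.precision, stats.recall, stats.f1) # P(t), R(t), F1(t)

# Equivalent manual computation from the counts:
precision = stats.tp / (stats.tp + stats.fp)
recall = stats.tp / (stats.tp + stats.fn)
f1 = 2 * precision * recall / (precision + recall)
```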

The dataset contains 2D images of fluorescently stained nuclei. Widefield microscopy (Leica THUNDER Imager) was used for live-cell imaging of Madin-Darby Canine Kidney (MDCK) cells. The cells were stained with SiR-Hoechst, a far-red DNA stain [26], and incubated with Verapamil to improve the fluorescence signal [26]. Videos were acquired with a 20x (0.4 NA) air objective (Leica) at a frame interval of 10 minutes over a total capture time of 15 hours. The training dataset consists of a total of 5 images containing 6409 nuclei; the test image contains a total of 792 nuclei. All nuclei were annotated manually to provide ground-truth data. The training and test images were captured at different positions on the same sample. 16-bit images with a 1:1 aspect ratio and a size of 2048 x 2048 pixels are used.
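As an illustration of how such images are typically prepared for StarDist training, the snippet below loads 16-bit TIFFs and applies the percentile-based intensity normalization customary in the CSBDEEP/StarDist workflow. The directory layout and the 1-99.8 percentile range are assumptions, not taken from the protocol.

```python
from glob import glob
from tifffile import imread
from csbdeep.utils import normalize

# Hypothetical paths; the protocol's 5 training images (6409 nuclei) and
# 1 test image (792 nuclei) would live in directories like these.
X = [imread(f) for f in sorted(glob("train/images/*.tif"))]  # 16-bit, 2048 x 2048
Y = [imread(f) for f in sorted(glob("train/masks/*.tif"))]   # integer label masks

# Percentile normalization of intensities (1st-99.8th percentile is the
# commonly recommended StarDist default); label masks are left untouched.
X = [normalize(x, 1, 99.8) for x in X]
```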

To validate our method on an independent dataset, we used a subset of 20 training images and 4 test images from image set BBBC038v1, available from the Broad Bioimage Benchmark Collection [6].

The automated annotation process was run on a CPU (1.6 GHz dual-core Intel Core i5).

For investigating instance segmentation performance, the neural network was trained on an Nvidia K80 graphics processing unit (GPU). No data augmentation [27,28] was used during training. The network was trained for a total of 100 epochs with a batch size of 4, 256 training steps per epoch, an input patch size of 64 x 64 pixels, 32 rays per object, a grid parameter of 2 and an initial learning rate of 0.0003.
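These hyperparameters map directly onto StarDist's Config2D. The sketch below shows a training run under the stated settings; the train/validation split, model name and output directory are assumptions for illustration, and X, Y are the normalized images and label masks from the loading sketch above.

```python
from stardist.models import Config2D, StarDist2D

# Configuration mirroring the reported hyperparameters.
conf = Config2D(
    n_rays=32,                  # rays per object
    grid=(2, 2),                # grid parameter of 2
    train_patch_size=(64, 64),  # input patch size
    train_batch_size=4,
    train_steps_per_epoch=256,
    train_epochs=100,           # 100 training cycles
    train_learning_rate=0.0003, # initial learning rate
)

# Simple train/validation split (the 80/20 ratio is an assumption).
n_val = max(1, len(X) // 5)
X_trn, Y_trn = X[n_val:], Y[n_val:]
X_val, Y_val = X[:n_val], Y[:n_val]

# Train from scratch, i.e. no pre-trained weights; 'nuclei_mdck' and
# 'models' are hypothetical names for the run and the output directory.
model = StarDist2D(conf, name="nuclei_mdck", basedir="models")

# augmenter=None reflects that no data augmentation was used.
model.train(X_trn, Y_trn, validation_data=(X_val, Y_val), augmenter=None)
```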
