Quantitative Validation of the NR Results

Chi-Tin Shih
Nan-Yow Chen
Ting-Yuan Wang
Guan-Wei He
Guo-Tzau Wang
Yen-Jen Lin
Ting-Kuo Lee
Ann-Shyn Chiang

A series of quantities was computed to compare the NR and human segmentation results, covering the segmented voxel sets and the following global structural features:

DCM: normalized center-of-mass distance, which measures the difference between the positions of the two segmentations. RH and RNR are the center-of-mass vectors of the human- and NR-segmented images, respectively. All voxels with non-zero intensity were weighted equally (mass = 1).

DCM = min(|RNR − RH| / rH, 1),

where rH is the radius of gyration of the human-segmented image. For some heavily tangled cases, |RNR − RH| / rH was larger than 1; the "min" operator keeps DCM between 0 and 1.
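As a concrete illustration, the center-of-mass distance can be sketched in NumPy as follows; the function names and the convention of treating every non-zero voxel of a 3-D intensity array as a unit mass are mine, not prescribed by the protocol:

```python
import numpy as np

def center_of_mass(img):
    """Center of mass of the non-zero voxels, each weighted equally (mass = 1)."""
    coords = np.argwhere(img > 0)          # (N, 3) voxel coordinates
    return coords.mean(axis=0)

def radius_of_gyration(img):
    """RMS distance of the non-zero voxels from their center of mass."""
    coords = np.argwhere(img > 0)
    com = coords.mean(axis=0)
    return np.sqrt(((coords - com) ** 2).sum(axis=1).mean())

def d_cm(img_h, img_nr):
    """Normalized center-of-mass distance, clipped to [0, 1] via min(., 1)."""
    dist = np.linalg.norm(center_of_mass(img_nr) - center_of_mass(img_h))
    return min(dist / radius_of_gyration(img_h), 1.0)
```

For identical images the distance is 0; for an NR result shifted farther than one radius of gyration, the "min" operator saturates the value at 1.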

DRG: normalized radius-of-gyration difference, which measures the difference between the sizes of the two segmentations.

DRG = min(|rNR − rH| / rH, 1),

where rNR is the radius of gyration of the NR-segmented image. Again, for cases in which |rNR − rH| / rH was larger than 1, the "min" operator keeps DRG between 0 and 1.
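A minimal NumPy sketch of this size comparison, under the same unit-mass voxel convention as above (function names are mine):

```python
import numpy as np

def radius_of_gyration(img):
    """RMS distance of the non-zero voxels from their center of mass."""
    coords = np.argwhere(img > 0)
    com = coords.mean(axis=0)
    return np.sqrt(((coords - com) ** 2).sum(axis=1).mean())

def d_rg(img_h, img_nr):
    """Normalized radius-of-gyration difference, clipped to [0, 1]."""
    r_h = radius_of_gyration(img_h)
    r_nr = radius_of_gyration(img_nr)
    return min(abs(r_nr - r_h) / r_h, 1.0)
```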

DI: normalized moment-of-inertia difference, which measures the difference between the rough shapes of the two segmentations. For an image, the principal moments of inertia are I1, I2, and I3, with I1 ≥ I2 ≥ I3. The normalized principal-moment-of-inertia vector i was then defined as i = (1, I2/I1, I3/I1). iH and iNR are the moment-of-inertia vectors of the human- and NR-segmented images, respectively.
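The normalized vector i can be computed from the inertia tensor of the unit-mass voxels as sketched below. Note that the exact expression combining iH and iNR into DI is not reproduced in this excerpt; the clipped Euclidean gap used in `d_i` is my assumption, marked as such in the code:

```python
import numpy as np

def inertia_vector(img):
    """Normalized principal moments of inertia i = (1, I2/I1, I3/I1),
    with I1 >= I2 >= I3, for the non-zero voxels (unit mass each)."""
    coords = np.argwhere(img > 0).astype(float)
    coords -= coords.mean(axis=0)                 # about the center of mass
    sq = (coords ** 2).sum(axis=1)
    tensor = np.diag([sq.sum()] * 3) - coords.T @ coords
    moments = np.sort(np.linalg.eigvalsh(tensor))[::-1]   # descending
    return moments / moments[0]

def d_i(img_h, img_nr):
    # ASSUMED distance: Euclidean gap between the normalized vectors,
    # clipped to [0, 1]; the protocol's exact formula is not shown here.
    gap = np.linalg.norm(inertia_vector(img_nr) - inertia_vector(img_h))
    return min(gap, 1.0)
```

For a straight line of voxels the two large moments are equal and the smallest vanishes, so i = (1, 1, 0).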

DPA: difference of the orientations of the principal axes, which measures the difference between the orientations of the two segmentations. For a given image, Ai is the principal axis corresponding to the principal moment of inertia Ii (i = 1, 2, 3).
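The principal axes are the eigenvectors of the same inertia tensor. The excerpt does not reproduce how the axis orientations are combined into a single DPA value, so the measure below (mean angle between corresponding axes, sign-insensitive, normalized by π/2 to land in [0, 1]) is an assumed stand-in, flagged in the code:

```python
import numpy as np

def principal_axes(img):
    """Principal axes (columns A1, A2, A3) of the non-zero voxels,
    ordered by descending principal moment of inertia."""
    coords = np.argwhere(img > 0).astype(float)
    coords -= coords.mean(axis=0)
    sq = (coords ** 2).sum(axis=1)
    tensor = np.diag([sq.sum()] * 3) - coords.T @ coords
    vals, vecs = np.linalg.eigh(tensor)      # eigenvalues ascending
    return vecs[:, ::-1]                     # reorder to descending

def d_pa(img_h, img_nr):
    # ASSUMED measure: mean angle between corresponding axes, using
    # abs(dot) because a principal axis has no preferred sign, and
    # normalizing by pi/2 so the result lies in [0, 1].
    ax_h, ax_nr = principal_axes(img_h), principal_axes(img_nr)
    cos = np.abs((ax_h * ax_nr).sum(axis=0)).clip(0.0, 1.0)
    return float(np.mean(np.arccos(cos)) / (np.pi / 2))
```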

Recall: defined as the number of true-positive voxels, i.e., voxels that exist in the human-segmented image and were correctly detected by NR, divided by the number of voxels in the human-segmented image: R = |VH ∩ VNR| / |VH|, where VH and VNR are the sets of voxels in the human- and NR-segmented images, respectively.

Precision: defined as the number of voxels in the intersection of the human- and NR-segmented images divided by the number of voxels in the NR-segmented image: P = |VH ∩ VNR| / |VNR|.
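Both voxel-set measures can be sketched together in NumPy (the function name is mine; the sets VH and VNR are taken as the non-zero voxels of each image):

```python
import numpy as np

def recall_precision(img_h, img_nr):
    """Recall R = |VH ∩ VNR| / |VH| and precision P = |VH ∩ VNR| / |VNR|,
    where the sets are the non-zero voxels of each image."""
    v_h = img_h > 0
    v_nr = img_nr > 0
    tp = np.logical_and(v_h, v_nr).sum()     # true-positive voxels
    return tp / v_h.sum(), tp / v_nr.sum()
```

An NR result that covers every human-segmented voxel plus extras yields R = 1 but P < 1, which is exactly the asymmetry discussed below for SGlobal.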

SGlobal: combining the comparisons of center-of-mass position, image size, image orientation, and voxel accuracy, we defined the global similarity between the human- and NR-segmented images as:

DCM, DRG, DI, DPA, and R all lie between 0 and 1 by definition, so the value of SGlobal also lies between 0 and 1. Note that the precision P is not included in the definition of SGlobal. As described previously, NR tended to segment more detail from the raw image, whereas humans tended to produce cleaner and sharper segmentations. The fibers in the NR-segmented image were usually thicker than those in the human-segmented one, so the number of voxels in the NR-segmented image was almost always larger than in the human-segmented one, owing to the large surface-to-volume ratio of the tree-like neuronal structure. These extra voxels, although they belong to real features, would be counted as "false positives" and lower the precision. As a result, P was not high even for neurons classified as "matched" by the biologists' visual validation (red bars in Figure 4f). On the other hand, some of the broken cases had higher P precisely because they lacked those extra voxels. P was therefore excluded from the calculation of SGlobal. Genuinely false-positive voxels that do not arise from this effect are still reflected in the D values and decrease the global similarity.
