The lines scanned and registered here were incorporated into the locally run Zebrafish Brain Browser, which requires downloading and installing the free IDL runtime environment. ZBB2 (including software and full-resolution datasets) can be downloaded from our website (https://science.nichd.nih.gov/confluence/display/burgess/Brain+Browser).
To increase accessibility we also implemented an online version of ZBB2 that does not require downloading and runs in any JavaScript-enabled web browser (http://zbbrowser.com). We used Bootstrap (http://getbootstrap.com/) for interface design and jQuery for event handling (https://jquery.com/). For rendering of 2D slices and 3D projections, we used X3DOM, a powerful set of open-source 3D graphics libraries for web development that integrate the X3D file format into the HTML5 DOM (Behr et al., 2009; Congote, 2012; John et al., 2008; Polys and Wood, 2012). ZBB2 uses X3DOM's built-in MPRVolumeStyle and BoundaryEnhancementVolumeStyle functions to render 2D image files (texture atlases) in 3D space. The MPRVolumeStyle is used for the X, Y, and Z slicer views to display a single slice from a 3D volume along a defined axis. We modified the X3DOM source code for this volume style to support additional features, including color selection, contrast and brightness controls, rendering of crosshairs, spatial search boxes, and intersections between selected lines. The BoundaryEnhancementVolumeStyle renders the 3D projection; we also modified this function's source code to add color, contrast, and brightness controls. Other minor changes were made to the X3DOM libraries, including a hardcoded override to allow additive blending of line colors. The online ZBB2 loads the images for each line as a single 2D texture atlas. To ensure rapid loading, image volumes for each line were converted to montages by taking every fourth plane in the z-dimension and scaling each plane to 0.25, 0.5, or 0.75 of its original size for the low, medium, and high resolution settings, respectively. Texture atlas images were then referenced using X3DOM's 'ImageTextureAtlas' node, with its 'numberOfSlices', 'slicesOverX', and 'slicesOverY' attributes specified as 100, 10, and 10, respectively.
These atlases were then referenced by ‘VolumeData’ nodes, along with an MPRVolumeStyle or BoundaryEnhancementVolumeStyle node, to build the volumes visible on the screen.
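As a sketch, the assembled scene for one line resembles the following X3DOM fragment. The atlas filename and the 'dimensions' value are illustrative placeholders; only the 'numberOfSlices', 'slicesOverX', and 'slicesOverY' values (100, 10, 10) are taken from the text above, and the actual ZBB2 markup may differ:

```html
<x3d width="600px" height="600px">
  <scene>
    <!-- One VolumeData node per line; the voxel data come from a 2D texture atlas -->
    <volumedata dimensions="1.0 1.0 1.0">
      <imagetextureatlas containerField="voxels"
                         url="line_atlas.png"
                         numberOfSlices="100"
                         slicesOverX="10"
                         slicesOverY="10"></imagetextureatlas>
      <!-- Swap in boundaryenhancementvolumestyle here for the 3D projection view -->
      <mprvolumestyle></mprvolumestyle>
    </volumedata>
  </scene>
</x3d>
```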
To implement the 3D spatial search in the online edition of ZBB2, we first binarized each line and downsampled its resolution by a factor of four. The data for each line were then flattened into a single one-dimensional array (ordered by width, then height, then depth). We packed each run of eight adjacent binary values into a single byte using bit-shift operators, reducing the data size by a further factor of eight. While greatly downsized, the entire dataset was still much too large to download quickly. We therefore fragmented the array for each line into 8 × 8 × 8 blocks of 64 bytes each and concatenated the blocks for every line, creating a single array of approximately 17 kB for each sub-volume of the brain. After the user defines the search volume, the relevant volume fragments are downloaded and searched. Data from each fragment file are passed to a JavaScript Web Worker, allowing each file to be searched in a separate thread. This procedure keeps search times minimal, with the main limitation being that thousands of binary files must be regenerated whenever a new line is added to the library.
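The bit-packing step can be sketched as follows. The function names, the MSB-first bit order, and the byte-aligned block lookup are our assumptions for illustration, not the published implementation:

```javascript
// Pack an array of 0/1 voxel values into bytes, eight values per byte
// (MSB first), mirroring the bit-shift compression described above.
function packBits(binary) {
  const packed = new Uint8Array(Math.ceil(binary.length / 8));
  for (let i = 0; i < binary.length; i++) {
    if (binary[i]) packed[i >> 3] |= 0x80 >> (i & 7);
  }
  return packed;
}

// Count the labeled voxels of one line inside an 8 x 8 x 8 block with
// origin (bx, by, bz), in a packed volume of dimensions (w, h, d) flattened
// in width-height-depth order. Assumes bx is a multiple of 8 and w is a
// multiple of 8, so each row of eight voxels maps to exactly one byte.
function countBlockHits(packed, w, h, d, bx, by, bz) {
  let hits = 0;
  for (let z = bz; z < bz + 8; z++) {
    for (let y = by; y < by + 8; y++) {
      const byteIndex = (z * h * w + y * w + bx) >> 3;
      let b = packed[byteIndex];
      while (b) { hits += b & 1; b >>= 1; } // popcount of one byte
    }
  }
  return hits;
}
```

Each 64-byte block is thus self-contained, so a spatial search only needs to fetch and scan the blocks that overlap the user's search box.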