At the ages of 0.5, 1, 1.5, 2, and 3 months (for Lrat−/− and Lrat+/+ animals) and 5 months (for Lrat+/− and Lrat+/+ animals), a vision-based behavioral assay was performed. A customized light/dark box measuring 100 × 50 × 40 cm (length × width × height) was used, with half of the box darkened. The box was placed in the same position in the room for every measurement to prevent possible light/shade interference. The animals were placed in the light area of the box and filmed for twenty minutes. Deep learning was used to extract key features. In short, a Faster Region-based Convolutional Neural Network (Faster R-CNN) was used to locate and track the rat’s head. The Faster R-CNN was built on the ResNet-18 architecture and trained on 658 randomly sampled, annotated video frames. After training, the detector was deployed on each video, and the rat’s head position was recorded in every frame. Three zones were defined: a light zone, a dark zone, and a transition zone, the last defined as a circle centered at the base of the doorway with a radius of 1.25 times the doorway’s width. The tracking data were then processed using MATLAB. To validate the pipeline, a random subset of videos was selected, all parameters were extracted manually, and the values were compared to those extracted by the Faster R-CNN-based algorithm. No significant differences were found between the manually and automatically extracted parameters, confirming the robustness of the automatic analysis.
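The per-frame zone assignment described above can be sketched as follows. This is an illustrative Python reconstruction, not the authors' MATLAB code: the doorway coordinates, doorway width, and the assumption that the dark half occupies the far half of the box's long axis are all hypothetical placeholders.

```python
import math

def classify_zone(x, y, doorway_x, doorway_y, doorway_width, box_length):
    """Assign a tracked head position (x, y) to a zone.

    Transition zone: circle centered at the doorway base with radius
    1.25 * doorway_width (as defined in the protocol). Outside that
    circle, the frame is labeled by which half of the box the head is
    in; here the dark half is assumed to be x > box_length / 2
    (a hypothetical layout choice).
    """
    radius = 1.25 * doorway_width
    if math.hypot(x - doorway_x, y - doorway_y) <= radius:
        return "transition"
    return "dark" if x > box_length / 2 else "light"
```

For a 100-cm-long box with a 10-cm doorway centered at (50, 0), a head position 5 cm from the doorway base would be labeled "transition", while positions deep in either half would be labeled "light" or "dark" accordingly.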