Automatic behavior characterization

Britton A. Sauerbrei, Jian-Zhong Guo, Jeremy D. Cohen, Matteo Mischiati, Wendy Guo, Mayank Kabra, Nakul Verma, Brett Mensh, Kristin Branson, Adam W. Hantman

Using an adaptation of the Janelia Automatic Animal Behavior Annotator (JAABA), we trained automatic behavior classifiers that take information from the video frames as input and output predictions of the behavior category: lift, hand-open, grab, supination, at-mouth, or chew. We adapted JAABA to use Histogram of Oriented Gradients (HOG)52 and Histogram of Optical Flow (HOF)53 features derived directly from the video frames, rather than features derived from animal trajectories. The automatic behavior predictions were post-processed as described previously26 to find the first lift-hand-open-grab and supination-at-mouth-chew sequences. For the mid-reach thalamic perturbation experiments (Fig. 3c, E6), data were aligned to the last lift detected before laser onset.

Hand position was tracked with the Animal Part Tracker (APT) software package (https://github.com/kristinbranson/APT). Hand position was annotated manually in a set of training frames, and the cascaded pose regression54 algorithm was used to estimate the position of the hand in each remaining video frame. For the thalamus recordings and stimulation experiments (Fig. 4, 5), APT was used with the DeepLabCut algorithm55, and lifts were detected as threshold crossings of the hand velocity.
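The adapted pipeline computes per-frame appearance (HOG) and motion (HOF) features and feeds them to a frame-wise classifier. The sketch below illustrates this scheme in Python with OpenCV and scikit-learn; the patch size, the histogram parameters, and the use of GradientBoostingClassifier as a stand-in for JAABA's boosting classifier are assumptions for illustration, not the published configuration.

```python
import cv2
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# HOG over fixed-size grayscale patches; the window and histogram parameters
# here are illustrative, not the values used in the adapted JAABA.
HOG = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def hog_features(patch):
    # patch: 64x64 uint8 grayscale image
    return HOG.compute(patch).ravel()

def hof_features(prev_patch, patch, nbins=8):
    # Dense optical flow between consecutive frames, then a
    # magnitude-weighted histogram of flow orientations.
    flow = cv2.calcOpticalFlowFarneback(prev_patch, patch, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(ang, bins=nbins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

def frame_features(prev_patch, patch):
    return np.concatenate([hog_features(patch), hof_features(prev_patch, patch)])

# Train one per-frame classifier on manually labeled frames. JAABA itself
# uses a boosting classifier; GradientBoostingClassifier is a stand-in.
# X = np.stack([frame_features(f0, f1) for f0, f1 in labeled_frame_pairs])
# clf = GradientBoostingClassifier().fit(X, y)  # y: behavior label per frame
```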
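The post-processing that recovers the first lift-hand-open-grab (or supination-at-mouth-chew) sequence from per-frame predictions can be sketched as a simple forward scan. This is a simplification: the cited procedure26 also cleans up the raw per-frame predictions, which is omitted here, and the function name is hypothetical.

```python
def first_sequence(labels, order=("lift", "hand-open", "grab")):
    """Return the frame index of the first occurrence of each behavior in
    `order`, each strictly after the previous one, or None if the full
    sequence never completes. `labels` is one predicted category per frame."""
    indices, start = [], 0
    for behavior in order:
        for t in range(start, len(labels)):
            if labels[t] == behavior:
                indices.append(t)
                start = t + 1
                break
        else:  # behavior never found after `start`
            return None
    return indices

# Example: frames 1, 3, and 5 begin the first lift-hand-open-grab sequence.
labels = ["none", "lift", "lift", "hand-open", "none", "grab", "chew"]
print(first_sequence(labels))  # -> [1, 3, 5]
```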
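For the thalamus experiments, lifts were detected from threshold crossings of the hand velocity derived from the APT/DeepLabCut tracking. A minimal sketch, assuming 2D hand coordinates; the speed threshold and refractory period are illustrative parameters, not values from the paper.

```python
import numpy as np

def detect_lifts(xy, fps, speed_thresh, refractory_s=0.5):
    """Return frame indices where hand speed first rises above `speed_thresh`.
    xy: (n_frames, 2) tracked hand positions; fps: video frame rate.
    speed_thresh and refractory_s are illustrative, not published values."""
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) * fps  # units per second
    up = np.flatnonzero((speed[1:] >= speed_thresh) & (speed[:-1] < speed_thresh)) + 1
    lifts, last = [], -np.inf
    for t in up:
        if t - last >= refractory_s * fps:  # suppress crossings within one reach
            lifts.append(t)
            last = t
    return np.asarray(lifts, dtype=int)

# Aligning perturbation trials to the last detected lift before laser onset:
# lifts = detect_lifts(hand_xy, fps=frame_rate, speed_thresh=thresh)
# align_frame = lifts[lifts < laser_onset_frame][-1]
```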
