Data analysis

This protocol is extracted from research article:

Correspondence between Monkey Visual Cortices and Layers of a Saliency Map Model Based on a Deep Convolutional Neural Network for Representations of Natural Images

**eNeuro**, Jan 12, 2021; DOI: 10.1523/ENEURO.0200-20.2020


Procedure

Kriegeskorte et al. (2008) demonstrated that RDMs allow the direct comparison of neural representations between monkey IT and human IT, even though radically different measurement modalities were used for the two species (single-cell recording for monkeys and functional magnetic resonance imaging for humans). We used RDMs to compare the characteristics of the responses in the DCNN saliency map model with those of the neural representations in V1, V4, and IT.

We computed the representational dissimilarity (*RD*) between all pairs of natural object surfaces (Kriegeskorte et al., 2008; Hiramatsu et al., 2011; Goda et al., 2014) based on the firing rates of V1, V4, and IT neurons recorded by Tamura et al. (2016). To compute the RDMs, we standardized the mean firing rates to a Gaussian distribution with a mean of zero and a variance of one with respect to each neuron in the visual cortices. We computed the representational dissimilarity $RD_v(i, j)$ between two natural object surfaces *#i* and *#j* as follows:

$$RD_{v}(i,j)=1-\frac{\sum_{n}\left({f}_{n,i}^{v}-\overline{{f}_{i}^{v}}\right)\left({f}_{n,j}^{v}-\overline{{f}_{j}^{v}}\right)}{\sqrt{\sum_{n}\left({f}_{n,i}^{v}-\overline{{f}_{i}^{v}}\right)^{2}\;\sum_{n}\left({f}_{n,j}^{v}-\overline{{f}_{j}^{v}}\right)^{2}}}$$
where *v* denotes the visual cortex (V1, V4, or IT); *i* and *j* denote the natural object surface numbers ($1\le i,j\le 64$); *n* is the identity of the neuron; ${f}_{n,i}^{v}$ is the firing rate of neuron *n* in visual cortex *v* when object surface *#i* is presented; and $\overline{{f}_{i}^{v}}$ is the mean rate of the neural population of *v* in response to object surface *#i*. We computed the representational dissimilarity $RD_v(i, j)$ across the population of biological neurons in the monkeys (Kiani et al., 2007; Haxby et al., 2011).
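As a concrete sketch, the correlation-distance RDM described above can be computed from a neurons × stimuli firing-rate matrix as follows. This is a minimal illustration, assuming the dissimilarity is 1 minus the Pearson correlation of the population response vectors; the function name and the simulated data are hypothetical, not from the original study:

```python
import numpy as np

def compute_rdm(rates):
    """Correlation-distance RDM from a (neurons x stimuli) response matrix.

    Each neuron's rates are first z-scored across stimuli (mean 0, variance 1),
    as described in the protocol; the dissimilarity between two stimuli is then
    1 - Pearson correlation of their population response vectors.
    """
    # Standardize each neuron (row) across stimuli
    z = (rates - rates.mean(axis=1, keepdims=True)) / rates.std(axis=1, keepdims=True)
    # np.corrcoef on the transpose treats each stimulus as a variable,
    # giving a stimuli x stimuli correlation matrix
    corr = np.corrcoef(z.T)
    return 1.0 - corr

# Example: 100 simulated neurons responding to 64 object surfaces
rng = np.random.default_rng(0)
rates = rng.poisson(lam=5.0, size=(100, 64)).astype(float)
rdm = compute_rdm(rates)
print(rdm.shape)                        # (64, 64)
print(np.allclose(np.diag(rdm), 0.0))   # True: diagonal is zero by definition
```

The same function applies unchanged to model-neuron activities, with the (neurons × stimuli) matrix taken from a DCNN layer instead of recorded firing rates.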

In the same manner, we computed the representational dissimilarity $RD_l(i, j)$ between all input image pairs based on the activities of model neurons in each layer of the DCNN saliency map model as follows:

$$RD_{l}(i,j)=1-\frac{\sum_{n}\left({a}_{n,i}^{l}-\overline{{a}_{i}^{l}}\right)\left({a}_{n,j}^{l}-\overline{{a}_{j}^{l}}\right)}{\sqrt{\sum_{n}\left({a}_{n,i}^{l}-\overline{{a}_{i}^{l}}\right)^{2}\;\sum_{n}\left({a}_{n,j}^{l}-\overline{{a}_{j}^{l}}\right)^{2}}}$$
where *l* denotes a layer of the DCNN saliency map model (Fig. 1*A*); ${a}_{n,i}^{l}$ is the activity of model neuron *n* in layer *l* of the DCNN model with respect to object surface *#i*; and $\overline{{a}_{i}^{l}}$ is the mean activity of the model neuron population of layer *l* in response to object surface *#i*. Note that we used all model neurons from all channels of each layer in the DCNN model to compute $RD_l(i, j)$. We summarized the resulting pairwise dissimilarities as an RDM for each layer.

We used Pearson's correlation coefficient to quantify the correspondence between the RDMs for monkey V1, V4, and IT and those for each layer of the DCNN saliency map model. The correspondence $r_{vl}$ between the visual cortices and the DCNN saliency map model is defined as follows:

$$r_{vl}=\frac{\sum_{i<j}\left(RD_{v}(i,j)-\overline{RD_{v}}\right)\left(RD_{l}(i,j)-\overline{RD_{l}}\right)}{\sqrt{\sum_{i<j}\left(RD_{v}(i,j)-\overline{RD_{v}}\right)^{2}\;\sum_{i<j}\left(RD_{l}(i,j)-\overline{RD_{l}}\right)^{2}}}$$
where *v* and *l* denote the visual cortex (V1, V4, or IT) and the layer of the DCNN saliency map model (Fig. 1*A*), respectively. We computed $r_{vl}$ using the 2016 RDM elements representing response patterns for distinct pairs of natural object surfaces. $\overline{RD}$ denotes the mean intensity of these 2016 RDM elements. Because the diagonal elements of the RDM [$RD(i, i)$] are zero by definition, they were excluded from the computation of $r_{vl}$.
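The correspondence measure above can be sketched as the Pearson correlation between the off-diagonal (upper-triangular) elements of two RDMs; for 64 stimuli that is 64 × 63 / 2 = 2016 elements. A minimal illustration (function names and toy data are hypothetical):

```python
import numpy as np

def rdm_correlation(rdm_a, rdm_b):
    """Pearson correlation between the upper-triangular (off-diagonal)
    elements of two RDMs; for 64 stimuli this uses 2016 elements."""
    iu = np.triu_indices_from(rdm_a, k=1)  # exclude the zero diagonal
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

def toy_rdm(rng):
    """A random symmetric 64x64 matrix with a zero diagonal, standing in
    for a real RDM in this example."""
    m = rng.random((64, 64))
    m = (m + m.T) / 2.0
    np.fill_diagonal(m, 0.0)
    return m

rng = np.random.default_rng(1)
r = rdm_correlation(toy_rdm(rng), toy_rdm(rng))
print(np.triu_indices(64, k=1)[0].size)  # 2016
```

Correlating ranks of the upper-triangular elements (Spearman) is a common alternative in the RSA literature; the protocol above uses Pearson's coefficient.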

To understand the characteristics of the responses in the DCNN saliency map model in greater detail, we computed the partial correlation of RDMs between a specific visual cortex and each layer of the DCNN saliency map model, which removed the effects of the other visual cortices. The partial correlation is defined as follows:

$$r_{lx\cdot y}=\frac{r_{lx}-r_{ly}\,r_{xy}}{\sqrt{\left(1-r_{ly}^{2}\right)\left(1-r_{xy}^{2}\right)}}$$
where $r_{lx\cdot y}$ is the magnitude of the partial correlation between the RDM of a specific layer *l* of the DCNN saliency map model and that of visual cortex *x*, with the effect of the other visual cortex *y* removed; $r_{lx}$, $r_{ly}$, and $r_{xy}$ are the pairwise Pearson correlations between the corresponding RDMs.
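This is the standard first-order partial-correlation formula, which can be sketched directly from the pairwise RDM correlations (the function name and the numerical values below are illustrative, not from the original study):

```python
import numpy as np

def partial_correlation(r_lx, r_ly, r_xy):
    """First-order partial correlation between l and x, controlling for y:
    r_{lx.y} = (r_lx - r_ly * r_xy) / sqrt((1 - r_ly^2) * (1 - r_xy^2))."""
    return (r_lx - r_ly * r_xy) / np.sqrt((1.0 - r_ly ** 2) * (1.0 - r_xy ** 2))

# Example: if a layer correlates with V4 at 0.6, but the layer and V4 both
# also correlate with V1 (0.5 and 0.4), the V1-independent correspondence
# is smaller than the raw correlation
print(round(partial_correlation(0.6, 0.5, 0.4), 3))  # 0.504
```

When the controlled-for cortex is uncorrelated with both variables ($r_{ly} = r_{xy} = 0$), the partial correlation reduces to the raw correlation $r_{lx}$, as expected.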

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

