Derivation of network model with separate excitatory and inhibitory populations

David GT Barrett
Sophie Denève
Christian K Machens

When these ideas are applied to large and heterogeneous networks, we run into one problem concerning their biological interpretation. Individual neurons in these networks can sometimes target postsynaptic neurons with both excitatory and inhibitory synapses, so that they violate Dale’s law. To avoid this problem, we consider a network that consists of separate pools of excitatory and inhibitory neurons. The external signals feed only into the excitatory population, and the inhibitory population remains purely local. This scenario is captured by the equations,
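The displayed equations were not reproduced in this version of the text. As a hedged sketch, consistent with the description above (external signal into the excitatory population only, four positive connectivity matrices, explicit self-resets) and with the conventions of Boerlin et al. (2013), the membrane dynamics would take the form:

```latex
\dot{V}_i^E = -\lambda V_i^E + \mathbf{F}_i^E \cdot \mathbf{c}(t)
  + \sum_k \Omega_{ik}^{EE} s_k^E(t) - \sum_k \Omega_{ik}^{EI} s_k^I(t) - R_i^E s_i^E(t)

\dot{V}_i^I = -\lambda V_i^I
  + \sum_k \Omega_{ik}^{IE} s_k^E(t) - \sum_k \Omega_{ik}^{II} s_k^I(t) - R_i^I s_i^I(t)
```

Here 𝐅iE and 𝐜(t) stand in for the feedforward weights and external signal of Equation 5 (which is not reproduced here), and s(t) denotes the spike trains; the exact feedforward term is an assumption of this sketch.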

The four connectivity matrices, ΩikEE, ΩikEI, ΩikIE, ΩikII, are all assumed to have only positive entries, which enforces Dale’s law in this network. The neurons’ self-resets are now explicitly captured in the terms RiEsiE and RiIsiI, and are no longer part of the connectivity matrices themselves. We will now show how to design the four connectivity matrices such that the resulting network becomes functionally equivalent to the networks in the previous section. To do so, we apply the basic ideas outlined in Boerlin et al. (2013), as simplified by W. Brendel, R. Bourdoukan, P. Vertechi, and the authors (personal communication).

We start with the network in the previous section, as described in Equation 5, and split its (non-Dalian) recurrent connectivity matrix, Ωik, into self-resets, excitatory connections, and inhibitory connections. The self-resets are simply the diagonal entries of −Ωik (see Equation 18),

where the choice of signs is dictated by the sign conventions in Equation 23 and Equation 24. The excitatory connections are the positive entries of Ωik, for which we define

where the notation [x]+ denotes a threshold-linear function, so that [x]+ = x if x > 0 and [x]+ = 0 otherwise. The inhibitory connections are the negative, off-diagonal entries of Ωik, for which we write
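The split described here can be sketched numerically. The following is a minimal illustration, not the authors' code; the 3×3 matrix is made up, and its diagonal entries are taken negative (self-resets), as in these networks:

```python
import numpy as np

# Sketch: splitting a non-Dalian recurrent matrix Omega into self-resets R,
# excitatory connections Omega_EE (the positive entries, via the
# threshold-linear [x]_+), and inhibitory connections W (the magnitudes of
# the negative off-diagonal entries).
Omega = np.array([[-2.0,  0.5, -0.3],
                  [ 0.4, -1.5,  0.2],
                  [-0.1, -0.6, -1.8]])   # made-up example with negative diagonal

R = -np.diag(Omega)                  # self-resets: diagonal entries of -Omega
Omega_EE = np.maximum(Omega, 0.0)    # [Omega_ik]_+
W = np.maximum(-Omega, 0.0)          # [-Omega_ik]_+ ...
np.fill_diagonal(W, 0.0)             # ... restricted to off-diagonal entries

# Verify the identity Omega_ik = Omega_EE_ik - W_ik - delta_ik * R_k
assert np.allclose(Omega_EE - W - np.diag(R), Omega)
```

The final assertion checks the re-expression of the recurrent connectivity stated in the text, Ωik = ΩikEE − Wik − δikRk.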

With these three definitions, we can re-express the recurrent connectivity as Ωik = ΩikEE − Wik − δikRk.

Using this split of the non-Dalian connectivity matrix, we can trivially rewrite Equation 5 as an equation for an excitatory population,

which is identical to Equation 23 above, except for the term with the inhibitory synapses, Wik, on the right-hand-side. Here, the inhibitory synapses are multiplied by skE rather than skI, and their number is identical to the number of excitatory synapses. Accordingly, this term violates Dale’s law, and we need to replace it with a genuine input from the inhibitory population. Since this non-Dalian term consists of a series of delta-functions, we can only replace it approximately, which will suffice for our purposes. We simply require that the genuine inhibitory input approximates the non-Dalian term on the level of the postsynaptic potentials, i.e., the level of filtered spike trains,

Here the left-hand-side is the given non-Dalian input, and the right-hand-side is the genuine inhibitory input, which we are free to manipulate and design.

The solution to the remaining design question is simple. In the last section, we showed how to design arbitrary spiking networks such that they track a given set of signals. Following these recipes, we assume that the inhibitory population output optimally tracks the (filtered) spike trains of the excitatory population. More specifically, we assume that we can reconstruct the instantaneous firing rates of the excitatory neurons, riE, by applying a linear readout to the instantaneous firing rates of the inhibitory neurons, rjI, so that
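The displayed decoding relation is not reproduced in this version of the text; as a sketch consistent with the description, it would read:

```latex
\hat{r}_i^E(t) \;=\; \sum_j D^I_{ij}\, r_j^I(t) \;\approx\; r_i^E(t)
```

where, as noted in the following paragraph, all entries of the decoding weights 𝐃jI are positive.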

Note that all entries of 𝐃jI, the inhibitory decoding weights, must be positive. If the estimate of the excitatory firing rates closely tracks the real firing rates, then we can fulfil Equation 29.

To obtain this relation, we will assume that the inhibitory population minimizes its own loss function,

where the first term is the error incurred in reconstructing the excitatory firing rates, and the second term is a cost associated with the firing of the inhibitory neurons. Apart from the replacement of the old input signal, 𝐱, by the new ‘input signal’, 𝐫E, this is exactly the same loss function as in Equation 8. An important difference is that here the number of input signals, which corresponds to the number of excitatory neurons, may be larger than the number of output spike trains, which corresponds to the number of inhibitory neurons. Our spiking networks will in this case still minimize the mean-square-error above, though the representation could be lossy. Since the effective dimensionality explored by the instantaneous firing rates of the excitatory population is limited by the dimensionality of the input signal space, however, these losses are generally small for our purposes.
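The loss function itself is not reproduced here. By analogy with Equation 8 as described (a reconstruction error plus a firing cost; the quadratic cost with parameter μ is an assumption of this sketch), it would read:

```latex
L^I \;=\; \int \mathrm{d}t \left[ \,\big\| \mathbf{r}^E(t) - \mathbf{D}^I \mathbf{r}^I(t) \big\|^2
  \;+\; \mu\, \big\| \mathbf{r}^I(t) \big\|^2 \right]
```

The first term is the reconstruction error and the second the firing cost, as stated in the text.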

Given the above loss function, we can redo all derivations from the previous section, to obtain the connections, thresholds, and resets for the inhibitory population. Defining the short-hand

we find that

Here the first equation is equivalent to Equation 18, except for the different sign convention (ΩikII enters negatively in Equation 24), and for the subtraction of the self-reset term, which is not part of the recurrent connectivity. The second equation is equivalent to Equation 19. The third equation is equivalent to Equation 22, and the fourth equation is the self-reset term, containing the diagonal elements of Equation 18. Note that, due to the positivity of the inhibitory decoding weights, the two connectivity matrices have only positive entries, as presupposed above in Equation 24. Consequently, the inhibitory subnetwork obeys Dale’s law.
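Since Equations 18–22 and the displayed results are not reproduced here, the inhibitory parameters can only be sketched. Under the standard quadratic-cost derivation for networks of this type (cost parameter μ, an assumption; 𝐃jI denoting the decoding weights of inhibitory neuron j), they would take the form:

```latex
\Omega^{IE}_{jk} = D^I_{kj}, \qquad
\Omega^{II}_{jk} = \mathbf{D}^I_j \cdot \mathbf{D}^I_k + \mu\,\delta_{jk} - \delta_{jk} R^I_j, \qquad
T^I_j = \tfrac{1}{2}\!\left( \|\mathbf{D}^I_j\|^2 + \mu \right), \qquad
R^I_j = \|\mathbf{D}^I_j\|^2 + \mu
```

Consistent with the text, subtracting the self-reset RjI sets the diagonal of ΩII to zero, and the positivity of the decoding weights makes all entries of ΩIE and ΩII positive.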

In a last step, we replace the excitatory spike trains in Equation 28 with their inhibitory approximation. Specifically, we rewrite Equation 29 as

where we used Equation 30 in the approximation step. This approximation generally works very well for larger networks; in smaller networks, finite size effects come into play.
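The substitution step can be written out as a sketch (with Wik and DI as defined in the text):

```latex
\sum_k W_{ik}\, r_k^E(t) \;\approx\; \sum_k W_{ik} \sum_j D^I_{kj}\, r_j^I(t)
  \;=\; \sum_j \Omega^{EI}_{ij}\, r_j^I(t), \qquad
\Omega^{EI}_{ij} \;\equiv\; \sum_k W_{ik}\, D^I_{kj}
```

Since Wik is nonnegative and the decoding weights are positive, the resulting ΩEI has only positive entries, as required by Dale’s law.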

Finally, when we take into account the results from the previous section, write 𝐃iE for the excitatory decoder weight of the i-th neuron, and introduce the short-hand

then we find that the excitatory subnetwork has the connections, thresholds, and resets (compare Equation 25–Equation 27),
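The displayed results are not reproduced here. As a hedged sketch, combining the split of Ωik above with the standard quadratic-cost thresholds and resets (cost parameter μ, an assumption; 𝐃iE the excitatory decoder weights), the excitatory parameters would read:

```latex
\Omega^{EE}_{ik} = \left[\Omega_{ik}\right]_+, \qquad
\Omega^{EI}_{ij} = \sum_k W_{ik}\, D^I_{kj}, \qquad
T^E_i = \tfrac{1}{2}\!\left( \|\mathbf{D}^E_i\|^2 + \mu \right), \qquad
R^E_i = \|\mathbf{D}^E_i\|^2 + \mu
```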

Importantly, the EI network therefore consists of two populations, one excitatory, one inhibitory, both of which minimize a loss function. As a consequence, the excitatory subnetwork will compensate optimally as long as the inhibitory subnetwork remains fully functional. In this limit, the excitatory subnetwork is essentially identical to the networks discussed in the previous section. The inhibitory subnetwork will also compensate optimally until its recovery boundary is reached. In the following, we will focus on one subnetwork (and one loss function again). For notational simplicity, we will leave out the superscript references ‘E’.
