2.2. Simulating electrical and biochemical signals

Markus Breit, Martin Stepniewski, Stephan Grein, Pascal Gottmann, Lukas Reinhardt, Gillian Queisser

Having established methods for generating network topologies in the previous section, we now turn to modeling electrical signals, handling membrane transport mechanisms (including synapses), and discretizing the model equations by means of a new finite volume approach. Lastly, we summarize parallel methods for efficiently solving large-scale networks.

We follow well-established cable theory (cf. Thomson, 1854; Scott, 1975) to model electrical signals on spatially resolved neuron morphologies. A neuron's morphology is given as a graph consisting of vertices in three-dimensional space and edges connecting them. Common file formats for neuronal morphologies (such as SWC or HOC) contain radius or diameter values assigned to each vertex. We make use of this information in the simplest way, i.e., by supposing the morphology to be piecewise tubular, each piece being located around a vertex and having the radius associated with this vertex. With only very few modifications, we also implemented compartments shaped like truncated cones, resulting in a continuous radius along the neurites; however, we restrict ourselves to the case of tubular compartments in the following description for the sake of simplicity. In each of the compartments, we impose the following equation expressing the membrane's role as an ideal capacitor:

$$C_m \frac{\mathrm{d}V}{\mathrm{d}t} = I_{ax} + I_m, \qquad (1)$$
where V is the membrane potential, Cm is the capacitance of the compartment, and Iax, Im are the axial and transmembrane (inward) electric currents, respectively. The compartment's capacitance Cm depends on its shape and can be expressed in terms of a membrane-specific constant cm,

$$C_m = 2\pi a l \, c_m,$$
where a and l are the radius and the axial length of the compartment, respectively.

Axial currents need to be calculated at both ends of a compartment, at the interfaces with the neighboring compartments. They are assumed to be purely ohmic in nature and are expressed in terms of the potential difference between the two vertices associated with the neighboring compartments:

$$I_{ax} = \frac{2\pi \left(V(x_1) - V(x_2)\right)}{r_c \,(x_2 - x_1)\left(a_1^{-2} + a_2^{-2}\right)},$$
where rc is a material constant, the specific resistance of the cytosol, x is the axial coordinate, and x1, x2 as well as a1, a2 are depicted in Figure 3. Note that the above equation implicitly assumes that the extracellular potential is constant in space.

Figure 3. Illustration of the piecewise tubular compartments of the finite volume cable equation model and the definition of the axial ohmic flux.
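To make these compartment quantities concrete, the following C++ sketch computes the capacitance and the axial current between two neighboring tubular compartments. All names, as well as the series-resistor treatment of the two half-compartments, are our own illustrative assumptions, not the authors' UG 4 code.

    // Tubular compartment around a vertex (illustrative types, not the
    // authors' framework API).
    struct Compartment {
        double a;  // radius
        double l;  // axial length
    };

    constexpr double kPi = 3.141592653589793;

    // Compartment capacitance C_m = 2*pi*a*l * c_m, with the
    // membrane-specific constant c_m.
    double capacitance(const Compartment& c, double c_m) {
        return 2.0 * kPi * c.a * c.l * c_m;
    }

    // Ohmic axial current flowing from compartment 1 into compartment 2:
    // the two half-compartments act as cytosolic resistors in series (our
    // assumption for the interface treatment); r_c is the specific
    // cytosolic resistance, V1/V2 the potentials at the two vertices,
    // dx the distance between them.
    double axialCurrent(const Compartment& c1, const Compartment& c2,
                        double V1, double V2, double dx, double r_c) {
        const double R = r_c * 0.5 * dx / kPi
                       * (1.0 / (c1.a * c1.a) + 1.0 / (c2.a * c2.a));
        return (V1 - V2) / R;
    }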

Finally, the transmembrane current Im into the compartment is expressed in terms of the electrical flux density im as

$$I_m = 2\pi a l \, i_m \qquad (2)$$
and depends on transport mechanisms (e.g., Hodgkin-Huxley-type channels, Na/K pumps, leakage), synapses and electrodes definable on the membrane.

In order to track individual ion species, concentrations for K+, Na+, and Ca2+ or any other ion type can be added to the model. Each of the species satisfies a diffusion-convection equation in axial direction and is coupled to transport mechanisms in the plasma membrane.

Note that as these ions are charged, they are in reality affected by potential gradients, and conversely, for the same reason, their concentrations directly affect the potential. A physically more accurate model of ionic movement in neurons, incorporating both electric and diffusive properties of individual ion species, is electro-diffusion. It has been demonstrated that the modeling error introduced by using the cable equation can be prominent in thin compartments (Qian and Sejnowski, 1989) or where three-dimensional structural detail is concerned (Lopreore et al., 2008).

What is truly at the heart of most neuronal simulations is transport across membranes. We have defined an interface allowing the addition of arbitrary transport mechanisms to the electrical model in the transmembrane current density term im of Equation (2). These transport mechanisms are granted access to the underlying grid as well as to the unknowns of the voltage and ion species equations. Thus, they are able to declare and calculate their own sets of states, which may depend on given ones and vary in space and in time, such as the gating parameters m, n, and h in classical Hodgkin-Huxley-type channels, which are governed by ordinary differential equations in time that depend on the membrane potential (Hodgkin and Huxley, 1952). As the dependence of inner states of membrane transport systems on the potential and on ion concentrations is typically strongly non-linear, we have decided (in the interest of fast computation) to include transmembrane currents only by an explicit scheme, i.e., inner states are updated before each time step of the solution process using only the solution from the previous time step.
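The following C++ sketch illustrates such an interface and the explicit state update for a classical Hodgkin-Huxley potassium channel. The interface, class, and variable names are our own simplified stand-ins for whatever the framework actually defines; the rate functions and units follow the standard Hodgkin-Huxley formulation (time in ms, potentials in mV).

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Simplified stand-in for a transport-mechanism interface: inner states
    // are advanced explicitly from the previous time step's solution, then
    // the current density i_m of Equation (2) is evaluated.
    class ITransportMechanism {
    public:
        virtual ~ITransportMechanism() = default;
        virtual void updateStates(const std::vector<double>& V_prev,
                                  double dt) = 0;                  // dt in ms
        virtual double currentDensity(std::size_t k,
                                      const std::vector<double>& V) const = 0;
    };

    // Hodgkin-Huxley potassium channel: the gating variable n obeys
    // dn/dt = alpha_n(V)(1 - n) - beta_n(V) n and is integrated with one
    // explicit Euler step per time step (V in mV, rates in 1/ms).
    class HHPotassiumChannel : public ITransportMechanism {
        std::vector<double> n_;   // one gating state per compartment
        double gK_;               // maximal conductance density
        double EK_;               // potassium reversal potential [mV]
    public:
        HHPotassiumChannel(std::size_t nComp, double gK, double EK)
            : n_(nComp, 0.3177), gK_(gK), EK_(EK) {}

        void updateStates(const std::vector<double>& V_prev, double dt) override {
            for (std::size_t k = 0; k < n_.size(); ++k) {
                const double v  = V_prev[k];
                const double an = 0.01 * (v + 55.0)
                                  / (1.0 - std::exp(-(v + 55.0) / 10.0));
                const double bn = 0.125 * std::exp(-(v + 65.0) / 80.0);
                n_[k] += dt * (an * (1.0 - n_[k]) - bn * n_[k]);  // explicit Euler
            }
        }

        double currentDensity(std::size_t k,
                              const std::vector<double>& V) const override {
            const double n4 = std::pow(n_[k], 4);
            return gK_ * n4 * (EK_ - V[k]);  // inward-positive convention
        }
    };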

The concept is not unlike the NMODL model description language for NEURON by Hines and Carnevale (1997, 2000). In fact, we have developed an automated translation unit that can convert existing NMODL files to C++ source code compilable in our framework.

Glutamate being the primary excitatory neurotransmitter in most synapses of the central nervous system, we define excitatory synaptic input localized at dendrites as the postsynaptic response of AMPA or NMDA receptors to presynaptic glutamate signals. AMPA and NMDA receptors are cation channels that become permeable in the glutamate-bound state and thereby exhibit a conductance change in direct response to incoming presynaptic spikes; they induce a transmembrane flux of sodium, potassium, and calcium ions, causing a local excitatory depolarization of the membrane potential.

In our simulations we distinguish two general categories of synapses. The first category are primary synapses, which contact dendrites on the postsynaptic side; they are used to initialize activity in single cells as well as networks and represent connections to other neurons not included in the simulation. The second category are synapses connecting dendrites and axons that are both present within a network morphology. We call these interconnecting synapses.

As there is no information on the presynaptic side of primary synapses, the common and simple approach of alpha functions provides a reasonable approximation to model postsynaptic conductance profiles (Roth and van Rossum, 2009):

$$g(t) = g_{max}\,\frac{t - t_{onset}}{\tau}\; e^{\,1 - \frac{t - t_{onset}}{\tau}} \qquad (3)$$
where gmax denotes the maximal conductance, τ the rise/decay time constant, and tonset the arrival time of a single presynaptic spike. Note that gmax occurs at t = tonset + τ. The synaptic current Ips(t) is then defined by

$$I_{ps}(t) = g(t)\,\bigl(E_{rev} - V(t)\bigr)$$
with g(t) given by (3) for tonset ≤ t ≤ tonset + 6τ and g(t) = 0 otherwise. V(t) denotes the current postsynaptic membrane potential and Erev a reversal potential. For glutamatergic synapses, we use Erev ≈ 0 mV (Purves et al., 2001).
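A direct transcription of Equation (3) and the resulting current in C++ might look as follows; the function names are ours, and the inward-positive sign is the convention assumed above:

    #include <cmath>

    // Alpha-function conductance of Equation (3); the conductance peaks at
    // g_max when t = t_onset + tau and is truncated outside the support
    // window [t_onset, t_onset + 6*tau].
    double alphaConductance(double t, double g_max, double tau, double t_onset) {
        if (t < t_onset || t > t_onset + 6.0 * tau) return 0.0;
        const double s = (t - t_onset) / tau;
        return g_max * s * std::exp(1.0 - s);
    }

    // Primary synaptic current with reversal potential E_rev (about 0 mV
    // for glutamatergic synapses), inward-positive as assumed above.
    double primarySynapseCurrent(double t, double V, double g_max, double tau,
                                 double t_onset, double E_rev) {
        return alphaConductance(t, g_max, tau, t_onset) * (E_rev - V);
    }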

Interconnecting synapses are activated when the presynaptic membrane potential rises above a threshold Vth; the ensuing current Iis(t) at the postsynaptic end is modeled according to a bi-exponential activity function:

$$I_{is}(t) = g_{max}\, n \left(e^{-t/\tau_1} - e^{-t/\tau_2}\right)\bigl(E_{rev} - V(t)\bigr),$$

where t is the time elapsed since activation; gmax is the maximal conductance; Erev is a reversal potential; τ1 and τ2 are constants regulating rise and decay time of the conductance; tmax designates the point in time (after initial activation) at which the conductance is maximal; and the factor n normalizes the conductance such that its value is gmax at tmax.
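The normalization can be made explicit in code. The following hedged C++ sketch assumes τ1 > τ2 (τ1 governing decay, τ2 rise) and computes n from tmax:

    #include <cmath>

    // Bi-exponential conductance, assuming tau1 > tau2; t is the time since
    // the presynaptic potential crossed V_th. The factor n is chosen such
    // that the conductance equals g_max at t_max.
    double biexpConductance(double t, double g_max, double tau1, double tau2) {
        if (t < 0.0) return 0.0;
        const double t_max = tau1 * tau2 / (tau1 - tau2) * std::log(tau1 / tau2);
        const double n = 1.0 / (std::exp(-t_max / tau1) - std::exp(-t_max / tau2));
        return g_max * n * (std::exp(-t / tau1) - std::exp(-t / tau2));
    }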

Synaptic currents, like all other transmembrane currents, are evaluated using only the solution for the potential of the previous time step. This has significant benefits in parallel computation, as there is no direct coupling of the solutions for the next time step between cells connected to one another by synapses.

Our implementation provides a method to set generic activation patterns for a given set of input synapses in the computational domain. To achieve this, we introduce continuous random variables Xonset and Xτ for the timing parameters tonset and τ [cf. Equation (3)], respectively. Both are assumed to be normally distributed, i.e., Xonset ~ N(μonset, σ²onset) and Xτ ~ N(μτ, σ²τ), with probability density functions given by:

$$f_{X_{onset}}(x) = \frac{1}{\sqrt{2\pi}\,\sigma_{onset}}\, e^{-\frac{(x - \mu_{onset})^2}{2\sigma_{onset}^2}}, \qquad f_{X_\tau}(x) = \frac{1}{\sqrt{2\pi}\,\sigma_\tau}\, e^{-\frac{(x - \mu_\tau)^2}{2\sigma_\tau^2}}$$
After specification of a peak conductance gmax, a mean onset time μonset and duration μτ of synaptic activity, as well as corresponding standard deviation values σonset and στ, the parameters tonset and τ are set to random values drawn from the above normal distributions N(μonset, σ²onset) and N(μτ, σ²τ), respectively.
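In C++, such draws can be realized with the standard library's random facilities; this sketch (our own helper, not part of the framework) returns one (tonset, τ) pair per call:

    #include <random>

    struct SynapseTiming { double t_onset, tau; };

    // Draw one onset time and one time constant from the normal
    // distributions specified above. In practice, draws yielding tau <= 0
    // would have to be rejected or clamped.
    SynapseTiming drawTiming(std::mt19937& rng,
                             double mu_onset, double sigma_onset,
                             double mu_tau, double sigma_tau) {
        std::normal_distribution<double> onset(mu_onset, sigma_onset);
        std::normal_distribution<double> tau(mu_tau, sigma_tau);
        return { onset(rng), tau(rng) };
    }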

Given neuron morphologies (defined as graphs in three-dimensional coordinate space), we attach all information parameterizing synapses to the dendritic edges they are associated with. The distribution is managed by the C++ class SynapseDistributor, which provides methods to create new synapses or delete existing ones according to user-specified statistical distributions.

In our studies, we assume a uniform distribution of nsyn ∈ ℕ synapses on the edge sample space

$$\Omega = \left\{e_1, \ldots, e_{n_{edge}}\right\}$$
of the basal and apical dendrites. For this purpose, we consider a discrete random variable $X_{syn}^i$, i ∈ {1, …, nsyn}, for the i-th synapse to be distributed. With every draw, $X_{syn}^i$ can take one of the edge indices j ∈ {1, …, nedge} as its value, i.e., $X_{syn}^i = x_j^i := j$. To account for the heterogeneous edge lengths, every edge index is assigned an associated probability given by the following probability mass function:

$$P\!\left(X_{syn}^i = x_j^i\right) = \frac{|e_j|}{\sum_{k=1}^{n_{edge}} |e_k|},$$

where |ej| denotes the length of edge ej.
The exact location of the i-th synapse $x_j^i$ on the j-th edge is then drawn from a continuous uniform distribution on the interval (0, 1).
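The standard library again offers a direct realization. In the following sketch (illustrative types, not the SynapseDistributor API), std::discrete_distribution normalizes the edge lengths into exactly the probability mass function above:

    #include <cstddef>
    #include <random>
    #include <vector>

    struct SynapsePlacement {
        std::size_t edge;   // index of the chosen edge
        double localPos;    // relative position along the edge
    };

    // Place nSyn synapses: edge indices are drawn with probability
    // proportional to edge length, positions uniformly along each edge.
    std::vector<SynapsePlacement>
    placeSynapses(const std::vector<double>& edgeLengths, std::size_t nSyn,
                  std::mt19937& rng) {
        // discrete_distribution normalizes the weights to p_j = |e_j| / sum_k |e_k|
        std::discrete_distribution<std::size_t> edgeDist(edgeLengths.begin(),
                                                         edgeLengths.end());
        std::uniform_real_distribution<double> localDist(0.0, 1.0);
        std::vector<SynapsePlacement> out;
        out.reserve(nSyn);
        for (std::size_t i = 0; i < nSyn; ++i)
            out.push_back({ edgeDist(rng), localDist(rng) });
        return out;
    }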

We use a first-order (vertex-centered) Finite Volume (FV) scheme. This type of discretization method is well-suited for any problem resulting from a conservation law. In an FV scheme, one typically has a conservation formulation like the following:

$$\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = 0, \qquad (11)$$
where ρ is the density of a conserved quantity and j is a flux density of the same quantity. In our case, ρ represents the charge density for the voltage equation and the ionic concentration for the species equations; the flux densities are given by the electric current density and the ionic flux density, respectively. The conservation equation is then transformed into a system of ordinary (i.e., non-differential) equations by partitioning the domain on which the equation holds into so-called control volumes (in our case, these are exactly the compartments as defined above),

$$\Omega = \bigcup_k B_k, \qquad B_k \cap B_l = \emptyset \;\;\text{for } k \neq l,$$
then by integrating Equation (11) over each control volume (thus ensuring local conservation),

$$\frac{\mathrm{d}}{\mathrm{d}t}\int_{B_k} \rho \,\mathrm{d}x + \int_{\partial B_k} \mathbf{j} \cdot \mathbf{n} \,\mathrm{d}s = 0,$$
and finally by assuming the unknown function to be part of some finite-dimensional space (in our case: piecewise linear) in order to be able to represent it by a finite number of unknowns, which can be used to express the integrals explicitly in a system of ordinary equations,

$$\rho(x, t) = \sum_k \lambda_k(t)\, \rho_k(x),$$
where {ρk} are a known basis of the finite-dimensional function space, while {λk} are the coefficients in the corresponding representation of ρ and, at the same time, the unknowns of the resulting system of equations.
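Specialized to the voltage equation on the piecewise tubular geometry, where the control volumes are the compartments themselves, the scheme reduces to the compartment balance already stated above; in our condensed notation (a summary, not a quotation from the authors):

$$\underbrace{2\pi a_k l_k\, c_m}_{C_{m,k}}\; \frac{\mathrm{d}V_k}{\mathrm{d}t} \;=\; \sum_{j \in \mathcal{N}(k)} I_{ax}^{kj} \;+\; I_m^k,$$

where $\mathcal{N}(k)$ denotes the compartments neighboring compartment k, $I_{ax}^{kj}$ the ohmic axial current from j into k, and $I_m^k$ the transmembrane current of compartment k.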

Time discretization is achieved by an Euler scheme, backward with respect to axial fluxes and forward with respect to radial fluxes. The latter treatment results in a step size requirement for the time integration, the numerically well-known Courant-Friedrichs-Lewy (CFL) condition (Courant et al., 1928). In this particular case the condition states: the more transmembrane flux there is, the smaller the time step has to be chosen. If the requirement is not met (i.e., if the time step size is chosen too large), the solution will “explode,” meaning that it will tend to infinity very rapidly. In order to prevent such instability, we calculate and use an estimate for the allowed step size. Thus, our time step is neither too large (“explosion”) nor too small (inefficiency).
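A simple way to obtain such an estimate, sketched below under our own assumptions (a linearized membrane current with total conductance density g_tot per compartment, and the textbook forward-Euler stability bound dt < 2 cm/g_tot combined with a safety factor), is to bound the step by the stiffest compartment:

    #include <algorithm>
    #include <vector>

    // Heuristic CFL-type step size estimate for the explicitly treated
    // transmembrane (radial) fluxes: for c_m dV/dt = -g_tot V, forward
    // Euler is stable for dt < 2 c_m / g_tot. Linearization and the safety
    // factor are our illustrative assumptions, not the authors' estimator.
    double estimateTimeStep(const std::vector<double>& g_tot_per_compartment,
                            double c_m, double safety = 0.5) {
        const double g_max = *std::max_element(g_tot_per_compartment.begin(),
                                               g_tot_per_compartment.end());
        return safety * 2.0 * c_m / g_max;
    }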

Discretization is performed using the numerical framework UG 4 (Vogel et al., 2013). It is written in C++ and simulations can be set up and run using the widespread scripting language Lua, which makes this framework easy to use without learning a highly specialized language of its own.

Solution of the symmetric system of linear equations emerging from the discretization is also done within the UG 4 framework. The tree structure of neurons allows for efficient usage (i.e., with linear runtime complexity in terms of the number of unknowns) of a direct solver if the unknowns are numbered in such a way that, in each row of the matrix, there is at most one non-zero entry to the right of the diagonal. We use a Cuthill-McKee (Cuthill and McKee, 1969) ordering to guarantee this. We solve by calculating the LU decomposition in a sparse matrix format.
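On a tree, such a numbering can be produced by a breadth-first (Cuthill-McKee-type) traversal. The sketch below (our illustration, not the framework's implementation) reverses the breadth-first order so that every vertex is numbered before its parent, leaving each row with at most one non-zero entry to the right of the diagonal and letting the sparse LU factorization proceed without fill-in:

    #include <cstddef>
    #include <queue>
    #include <vector>

    // BFS numbering of a tree given as adjacency lists, reversed so that
    // children precede their parents: each vertex then has exactly one
    // neighbor (its parent) with a higher index.
    std::vector<std::size_t> treeOrdering(
            const std::vector<std::vector<std::size_t>>& adj,
            std::size_t root) {
        std::vector<std::size_t> order;
        std::vector<bool> seen(adj.size(), false);
        std::queue<std::size_t> q;
        q.push(root); seen[root] = true;
        while (!q.empty()) {
            const std::size_t v = q.front(); q.pop();
            order.push_back(v);
            for (std::size_t w : adj[v])
                if (!seen[w]) { seen[w] = true; q.push(w); }
        }
        // children-before-parents: reverse the BFS order
        return std::vector<std::size_t>(order.rbegin(), order.rend());
    }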

As UG 4 comes with full MPI support for parallel calculations, the usage of large-scale computing facilities, which is inevitable for the simulation of large networks, is straightforward. Partitioning of the domain can be performed using Metis 5.0 (Karypis and Kumar, 1998) and can be achieved on two levels:

In large networks, whole neurons can be assigned to the processors (as described for NEURON in Migliore et al., 2006), resulting in an “embarrassingly parallel” problem, since there is no direct coupling between the neurons if synaptic events triggered on the presynaptic side in one time step are taken into account only in the next time step on the postsynaptic side. If whole neurons can be distributed in such a way that the processors' workloads are well balanced, this is the preferred way of parallelizing, as the solution of the problem works exactly as in the serial case and communication is only needed at active synapses.

On a second parallelization level, it is also possible to cut neurons and assign their parts to different processors. The process of solving the system of equations is somewhat more involved in that case. Assuming the system to be solved on a processor is

$$A\,x = b,$$
then the iterative solving process on each processor is defined by the following pseudo-code:

 x0 = solution from the previous time step
 d = d0 = b - A·x0 ("defect" vector)
 while |d| ≥ |d0| · reductionFactor on any processor do
     c = A⁻¹·d (calculate correction)
     sum up (over all processors) the corrections at all cutting points and store back in c
     x = x + c (update solution)
     d = b - A·x (update defect)
 end while

In order for this to work, the process-wise matrices A need to be stored “additively,” i.e., the entries of the global system matrix must be equal to the sum of the corresponding entries in the process-wise matrices (where they exist).
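The cross-process step of the loop, the summation of corrections at cutting points, maps onto a single collective operation. The buffer layout in this sketch is our simplifying assumption (UG 4 uses its own communication interfaces):

    #include <mpi.h>
    #include <vector>

    // Sum the corrections at cut vertices over all processes. For
    // illustration we assume each process stores its cut-point values in a
    // dense buffer indexed consistently across processes.
    void sumCutPointCorrections(std::vector<double>& cutValues, MPI_Comm comm) {
        MPI_Allreduce(MPI_IN_PLACE, cutValues.data(),
                      static_cast<int>(cutValues.size()),
                      MPI_DOUBLE, MPI_SUM, comm);
    }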

It usually takes about five to fifteen iterations until convergence is achieved, depending on how many neurons are cut and at which locations. Of course, in the case where no neuron is cut by the distribution of the network, the iteration converges in one step. The gain in computation time from parallelizing on this level is obviously not as large as from distributing whole neurons; however, it can still provide some speedup, as most of the time is spent not on solving the system but on setting it up in the first place.
