Reduction of the Number of Synapses

cafischer

Reduction of the Number of Synapses

Post by cafischer »

I built a model of a neuron and want to stimulate it with synapses. A neuron in the brain typically receives input from thousands of synapses. I was trying to think of a way to reduce this number in my simulations in order to reduce computation time. My idea was the following:

1. Compute a spike train for every synapse.
2. Before the simulation, compute the conductance of each synapse at every time step, using e.g. G = weight * factor * (exp(-t/tau2) - exp(-t/tau1)).
3. Add the conductances.
4. During the simulation, calculate the current with a point process using the formula i = G * (v - e).
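In plain Python (not NEURON code), steps 1-4 would amount to something like the sketch below. The function names are mine, and I'm assuming tau1 is the rise time constant and tau2 the decay time constant (tau1 < tau2):

```python
import math

def dual_exp_g(t, weight, tau1, tau2, factor=1.0):
    """Conductance contributed by one synaptic event, t ms after the
    spike (0 before the spike), using the dual-exponential from step 2."""
    if t < 0:
        return 0.0
    return weight * factor * (math.exp(-t / tau2) - math.exp(-t / tau1))

def summed_conductance(spike_trains, weights, tau1, tau2, dt, t_stop):
    """Steps 1-3: precompute the total conductance G at every time step
    by summing the contribution of every spike of every synapse."""
    n = int(t_stop / dt) + 1
    g = [0.0] * n
    for train, w in zip(spike_trains, weights):
        for t_spike in train:
            for i in range(n):
                g[i] += dual_exp_g(i * dt - t_spike, w, tau1, tau2)
    return g

def syn_current(g, v, e):
    """Step 4: the current at one time step, i = G * (v - e)."""
    return g * (v - e)
```

Because the conductances add linearly, the precomputed G for all synapses together equals the sum of the per-synapse conductances.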

My questions are:
Firstly, are there better approaches to reducing the number of synapses? (For instance, one could use fewer synapses and increase their firing frequency, but this leads to saturation that does not occur when several synapses are used.)
Secondly, is my proposal reasonable, and how could step 4 be implemented in NEURON? (Perhaps by making a point process that receives the conductances via NetCon in its NET_RECEIVE block.)
ted
Site Admin
Posts: 6299
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

Re: Reduction of the Number of Synapses

Post by ted »

Good questions. Here are a couple of suggestions before you take any particular action--

1. See what others have written on the issue of "connectivity scaling." You might start by reviewing the page or so that Hasselmo and Kapur wrote about scaling of connection probability and connection strength in their chapter "Modeling of Large Networks" in the book

Computational Neuroscience: Realistic Modeling for Experimentalists
Erik de Schutter, editor
CRC Press, 2002

If your institution's library offers digital access to this book, you can find the chapter here http://www.crcnetbase.com/doi/abs/10.12 ... 39290.ch11

Also, you might want to see the discussion of scaling on p. 448 et seq. (about 4 small pages) in the chapter "Mean-field theory of irregularly spiking neuronal populations and working memory in recurrent cortical networks" by Renart, Brunel, and Wang in the book

Computational Neuroscience: A Comprehensive Approach
Jianfeng Feng, editor
CRC Press, 2003

You can get this chapter's pdf from http://www.cns.nyu.edu/wanglab/publicat ... er2003.pdf, and I'd be surprised if the other chapter says as much about the topic.

2. To the extent possible, take advantage of linear synaptic models, like ExpSyn and Exp2Syn, or the potentiating Gsyn and saturating AMPA_S model described in chapter 10 of the NEURON Book. A single instance of such a synaptic model can accept multiple input streams, i.e. be the target of spike events generated by multiple presynaptic spike sources, which is far more efficient than having multiple instances of the same model synapse next to each other, each one receiving its own input stream.

3. If your model cell has significant biophysical and anatomical complexity, you're going to have to add a lot of synapses before run times will start to suffer.
cafischer wrote:1. Compute a spike train for every synapse.
2. Before the simulation, compute the conductance of each synapse at every time step, using e.g. G = weight * factor * (exp(-t/tau2) - exp(-t/tau1)).
3. Add the conductances.
4. During the simulation, calculate the current with a point process using the formula i = G * (v - e).
Doesn't sound like a useful investment of your time. First, the calculated current is correct only if the synapses have no effect on membrane potential, and if that's true, why bother having synapses? Second, even if the calculated current were "kind of sloppily more or less correct on the average, if you don't look too close", it omits the effect of the localized synaptic conductance changes on the behavior of the innervated neuron.
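Point 2 above rests on the linearity of those synaptic mechanisms: their response to several merged input streams equals the sum of the responses of separate instances, one per stream. Here is a minimal sketch of that property in plain Python (not NEURON code); the two-state scheme loosely mimics Exp2Syn's kinetics, and the function name and parameters are illustrative:

```python
import math

def exp2_g(events, weight, tau1, tau2, dt, t_stop):
    """Simplified Exp2Syn-like conductance: on each event the states
    A and B both jump by `weight`; between events A decays with tau1
    and B with tau2, and the conductance is g = B - A."""
    a = b = 0.0
    da, db = math.exp(-dt / tau1), math.exp(-dt / tau2)
    ev = sorted(events)
    k, t, out = 0, 0.0, []
    for _ in range(int(t_stop / dt)):
        while k < len(ev) and ev[k] <= t:
            a += weight
            b += weight
            k += 1
        out.append(b - a)      # sample the conductance
        a *= da                # exponential decay between samples
        b *= db
        t += dt
    return out
```

One instance driven by the merged event stream produces exactly the same conductance waveform as the element-wise sum of two instances driven separately, which is why a single Exp2Syn can stand in for many identical co-located synapses.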
ted

Re: Reduction of the Number of Synapses

Post by ted »

ted wrote:To the extent possible, take advantage of linear synaptic models, like ExpSyn and Exp2Syn, or the potentiating Gsyn and saturating AMPA_S model described in chapter 10 of the NEURON Book. A single instance of such a synaptic model can accept multiple input streams, i.e. be the target of spike events generated by multiple presynaptic spike sources, which is far more efficient than having multiple instances of the same model synapse next to each other, each one receiving its own input stream.
To be a bit more explicit, suppose 50 identical GABA-A synapses are attached to the soma of a model cell, and each synapse is driven by a different presynaptic neuron. This could be represented efficiently by a single Exp2Syn that receives spike trains (delivered by NetCons) from 50 different presynaptic model neurons. If each NetCon has the same weight, and each presynaptic neuron's spiking is described by the negative exponential distribution and has the same average firing frequency f Hz (mean interspike interval = 1000/f), further simplification is possible: replace all of the NetCons and presynaptic neurons with a single NetCon driven by a single presynaptic neuron that spikes according to a negative exponential distribution with a mean firing frequency of 50*f (mean interspike interval = 1000/(50*f)).
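The replacement in the last sentence relies on the superposition property of Poisson processes: merging 50 independent streams with rate f gives one stream with rate 50*f. A quick sanity check in plain Python (not NEURON code; in practice a NetStim with negexp intervals would play the role of the presynaptic spike source):

```python
import random

def poisson_train(rate_hz, t_stop_ms, rng):
    """Spike times (ms) with exponentially distributed ISIs,
    mean ISI = 1000/rate_hz ms."""
    t, train = 0.0, []
    while True:
        t += rng.expovariate(rate_hz / 1000.0)  # draw one ISI in ms
        if t >= t_stop_ms:
            return train
        train.append(t)

rng = random.Random(1)
f, n, t_stop = 10.0, 50, 10000.0  # 50 inputs at 10 Hz, 10 s of activity

# 50 independent trains merged into one event stream ...
merged = sorted(t for _ in range(n) for t in poisson_train(f, t_stop, rng))
# ... versus a single train generated directly at rate 50 * f
single = poisson_train(n * f, t_stop, rng)
```

Both streams have a mean interspike interval close to 1000/(50*f) = 2 ms, so a single presynaptic source at the summed rate is statistically interchangeable with the 50 separate sources.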
cafischer

Re: Reduction of the Number of Synapses

Post by cafischer »

Thanks for the answer. That was exactly what I was looking for: a synapse that can take several inputs. I did not know that Exp2Syn was already implemented to do that (i.e. to add the inputs linearly). And it really is faster than having several synapses with one input each.
ted

Re: Reduction of the Number of Synapses

Post by ted »

Many event-driven synaptic mechanisms can be implemented in a way that allows a single mechanism instance to deal with multiple input streams. It is even possible to implement mechanisms that show stream-specific saturation, use-dependent plasticity (short term depression or facilitation), and spike-timing dependent plasticity. This is done by using elements of the NetCon's weight vector whose index is >0 to store stream-specific data such as time of previous synaptic activation, whether transmitter is present or absent, or the values of "potentiation state variables" that only have to be updated when a new event arrives. Chapter 10 of the NEURON book contains several examples.
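To illustrate the idea (in plain Python, not NMODL; the class, its name, and its parameters are all hypothetical), here is a sketch of a multi-stream synapse with stream-specific depression, where each stream's state is touched only when one of its events arrives -- the analogue of stashing per-stream data in NetCon weight-vector elements with index > 0:

```python
import math

class DepressingSynapse:
    """Event-driven synapse sketch. Each input stream carries its own
    (last event time, depression factor) pair, updated only when a new
    event from that stream arrives. Conductance decay between events
    is omitted to keep the sketch short."""
    def __init__(self, tau_recovery=100.0, depression=0.5):
        self.tau = tau_recovery   # ms, recovery time constant
        self.d = depression       # multiplicative depression per event
        self.g = 0.0              # accumulated conductance
        self.streams = {}         # per-stream state, like weight[1], weight[2], ...

    def on_event(self, stream_id, t, weight):
        last_t, factor = self.streams.get(stream_id, (None, 1.0))
        if last_t is not None:
            # recover toward 1 during the silent interval since last event
            factor = 1.0 - (1.0 - factor) * math.exp(-(t - last_t) / self.tau)
        self.g += weight * factor
        # store the depressed state for this stream only
        self.streams[stream_id] = (t, factor * self.d)
```

Events from one stream depress only that stream's factor; a different stream arriving at the same synapse still gets its full weight.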