Hf Stimulus Network Performance: Event Delivery or Pointers?

Moderator: wwlytton

Raj
Posts: 219
Joined: Thu Jun 09, 2005 1:09 pm
Location: Hanze University of Applied Sciences
Contact:

Hf Stimulus Network Performance: Event Delivery or Pointers?

Post by Raj » Thu Aug 25, 2005 7:41 am

Dear Forum,

Background:
I have been working on a large network of
1028 three-compartment pyramidal neurons plus
256 one-compartment interneurons.

The model uses the event delivery system for communication between the neurons. Because the synapses are non-linear, a separate synapse instance is needed for every connection. Two cells have (per direction) a 60% chance of being connected, making the number of connections and synapses ~0.6*1284^2.
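For concreteness, the expected synapse count works out as follows (a quick sketch; the cell counts and connection probability are taken from the post above):

```python
# Expected number of connections/synapses in the network described above.
n_pyr = 1028                 # three-compartment pyramidal cells
n_int = 256                  # one-compartment interneurons
n_cells = n_pyr + n_int      # 1284 cells in total
p_connect = 0.6              # per-direction connection probability

# Every directed cell pair is a candidate connection, and each realized
# connection needs its own non-linear synapse instance.
expected_synapses = p_connect * n_cells ** 2
print(f"~{expected_synapses:,.0f} synapses")  # ~989,194
```

So the network carries on the order of a million synapse instances, which is why the per-synapse cost dominates everything else.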

Furthermore there is a 1000 Hz Poissonian input to every cell.

Problem:
In 100 ms of real time (with release rel5_7_159, screen update interval = 100, use_local_dt = on), it simulates 2.5 ms of simulated time, which is too slow for pleasant model exploration on human timescales.

Analysis:
Because the event delivery system needs to sort the events in a single global queue, one can expect a computational burden just from maintaining that queue. Furthermore, because of the high-frequency stimulation, cvode retreats to small time steps every millisecond. Of these two factors, the second is probably the more important.

Maybe a performance improvement could be achieved using synapses that respond in a graded way to the presynaptic voltage, i.e. they inject a current that depends on the presynaptic voltage rather than reacting to events through the event delivery system. (There are several examples of such synapses in ModelDB.) The price to pay for moving to pointers seems to me to be that you can no longer use use_local_dt and that the numerical method can no longer assume a tree topology for the cell. If, however, a modified NetStim generated random spike trains with every spike smoothed by, for example, a Gaussian, cvode's minimal (global) time step might be larger in this high-frequency stimulation regime.
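The graded-synapse idea can be sketched as follows (a Python illustration only; the sigmoid form is typical of such models, but all parameter names and values here are hypothetical, not from any particular mod file):

```python
import math

def graded_syn_current(v_pre, v_post, gmax=0.001, e_rev=-75.0,
                       v_half=-20.0, k=2.0):
    """Instantaneous current of a graded synapse (illustrative sketch).

    Instead of reacting to discrete events, the conductance follows the
    presynaptic voltage through a sigmoid, so only a POINTER to v_pre is
    needed. All parameter values are illustrative.
    """
    s = 1.0 / (1.0 + math.exp(-(v_pre - v_half) / k))  # sigmoid activation
    return gmax * s * (v_post - e_rev)                 # ohmic synaptic current

# During a presynaptic spike (v_pre well above v_half) the synapse is
# nearly fully on; at rest it is nearly off, i.e. close to all-or-none.
i_spiking = graded_syn_current(20.0, -65.0)   # close to gmax * (v_post - e_rev)
i_resting = graded_syn_current(-65.0, -65.0)  # close to zero
```

The steep sigmoid is what makes the graded coupling approximate the all-or-none behavior of event-based transmission.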

I'm considering rewriting my code to try it and see what happens. But before I make that decision I would like to know whether anybody knows, or can argue, why this is or isn't worth trying.

wwlytton
Posts: 64
Joined: Wed May 18, 2005 10:37 pm
Contact:

local variable time step with large simulations

Post by wwlytton » Thu Aug 25, 2005 10:48 am

With this dense connectivity and large amount of random input, you are likely not getting much benefit from the local variable time step and will probably see faster simulation with the implicit fixed step method. On the other hand, if you are interested in issues such as synchronization, the implicit method will force everything onto time step boundaries and may give you artificial synchronization based on this "sampling rate".

I'm not clear on how you view the biological validity of your proposal to use a graded PSP based on presynaptic voltage. The notion of the NetCon arises from the concept that axonal communication is all-or-none.

With regard to your statement about smoothing NetStim output with a Gaussian, I am again a little confused. The NetStim output is all-or-none. There is then a smoothing convolution produced based on the postsynaptic mechanism's waveform. The frequency content of this postsynaptic mechanism will determine the time step that CVODE needs to select.

Bill Lytton

For a paper that touches on artificial synchronization due to fixed dt, see

author = "Hansel, D and Mato, G and Meunier, C and Neltner, L",
title = "On numerical simulations of integrate-and-fire neural networks",
journal = "Neural Computation",
year = "1998",
volume = "10",
pages = "467-483",

hines
Site Admin
Posts: 1577
Joined: Wed May 18, 2005 3:32 pm

Post by hines » Thu Aug 25, 2005 11:01 am

I'd recommend you do some profiling to find out what the rate-limiting parts of the simulation are. You are right that cvode may be far too fastidious given the rate of synaptic input, but that just means that if the fixed step method is faster and accurate enough, you should switch to it. Also try the local variable time step method. If all your delays are the same (or if there are only a few sets of distinct delays), then the event queue can be reimplemented to be an order of magnitude faster. But what is the point if it represents 0.1% of the simulation time? From your comments, I hypothesize that the synaptic equation computation is the rate-limiting step, but without a clear profiling demonstration of that it is hard to focus on any particular thing.

ted
Site Admin
Posts: 5591
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

synaptic mechanism's response to a stream of events

Post by ted » Thu Aug 25, 2005 12:12 pm

wwlytton wrote:The NetStim output is all-or-none. There is then a smoothing convolution produced based on the postsynaptic mechanism's waveform.
Right. An event stream amounts to a sequence of Dirac delta (impulse) functions.
The target (postsynaptic mechanism) has its own unit impulse response (what it
does when driven by a single impulse function with amplitude 1). When a stream of
events hits a target that has no "memory" (use-dependent or other plasticity), what
you get is the convolution of the delta functions with the target's unit impulse response.
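For a memoryless target, this superposition can be sketched in a few lines (a Python illustration; the exponential impulse response, the time constant, and the event times are illustrative, not from the thread):

```python
import math

def expsyn_response(event_times, t, tau=2.0):
    """Response of a memoryless exponential synapse to an event stream.

    Each event is a Dirac impulse; the target's unit impulse response is
    exp(-t/tau). With no use-dependence, the total response is just the
    superposition (convolution) of one impulse response per event.
    """
    return sum(math.exp(-(t - te) / tau)
               for te in event_times if te <= t)

events = [1.0, 3.0, 3.5]           # event times in ms (illustrative)
g_all = expsyn_response(events, 4.0)
# Superposition: the response to the whole stream equals the sum of the
# responses to each event delivered alone.
g_each = sum(expsyn_response([te], 4.0) for te in events)
```

Once the mechanism has memory (saturation, use-dependence), this simple superposition no longer holds, which is exactly Ted's point about plastic synapses below.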

More complex synaptic mechanisms have been implemented that show various kinds
of plasticity (use-dependent, Hebbian etc.); obviously, the output of such mechanisms
is not a simple convolution of input stream with impulse response.

Raj
Posts: 219
Joined: Thu Jun 09, 2005 1:09 pm
Location: Hanze University of Applied Sciences
Contact:

Re: local variable time step with large simulations

Post by Raj » Thu Aug 25, 2005 1:00 pm

Dear Bill, Ted, Hines,

I tried using a fixed time step but found the behaviour to be qualitatively different from the variable time step, which makes me hesitant about using it; the reference you passed along may, however, give clues as to whether switching to a fixed time step is a viable option.

The validity of graded PSPs lies not so much in the biology as in the comparison with the model of Tegner, Compte and Wang (Biol. Cybern. 87, 471-481 (2002)), which I'm trying to convert to NEURON as a starting point for my own model. Their model has graded synapses with a sigmoid presynaptic voltage dependence, making it close to all-or-none. I would actually prefer to stay with the event delivery system, but if a large computational benefit would result from changing, I would consider it.

The NetStim is all-or-none, and by calling the alternative solution an alternative NetStim I seem to have put you on the wrong footing. Instead of a NetStim, one could think of creating a mechanism that generates a train of Gaussians to be fed into a graded synapse. Such a stimulus would not be a generator of events but a generator of an auxiliary presynaptic membrane potential, which can be coupled to a target cell through the appropriate pointers. If the width of the Gaussian is much smaller than the interspike interval, say 1% (10^-2 ms at 1000 Hz), it might still be larger than the steps of 10^-5 ms or 10^-9 ms I see passing when I run my simulations now. So although the frequency still puts limits on the step size, the drops to extremely small values might be reduced, but I'm afraid I'm speculating here.
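The proposed stimulus can be sketched like this (a Python illustration of the idea only; the function names and parameters are hypothetical, and the real thing would be an NMODL mechanism coupled through pointers):

```python
import math
import random

def gaussian_waveform(spike_times, sigma=0.01):
    """Return a function of time: a sum of unit-area Gaussians (width
    sigma, in ms), one centered on each spike time."""
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    def w(t):
        return sum(norm * math.exp(-0.5 * ((t - ts) / sigma) ** 2)
                   for ts in spike_times)
    return w

def poisson_times(rate_hz=1000.0, t_stop=10.0, seed=1):
    """Poisson spike times (ms) at the given rate, up to t_stop ms."""
    rng = random.Random(seed)
    mean_isi = 1000.0 / rate_hz
    times, t = [], rng.expovariate(1.0 / mean_isi)
    while t < t_stop:
        times.append(t)
        t += rng.expovariate(1.0 / mean_isi)
    return times

# sigma = 0.01 ms is ~1% of the 1 ms mean interval at 1000 Hz, as in the
# post: the waveform is sharply peaked at each spike and ~0 in between.
w = gaussian_waveform(poisson_times(), sigma=0.01)
```

The hope expressed above is that a smooth waveform like `w` would let cvode's global step stay near sigma rather than collapsing to 10^-5 ms or below at each event.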

Anyway, before trying to carry you further into these speculations about graded synapses, I should probably first act on Hines's advice and try profiling, which, if I'm right, requires recompiling NEURON with the -pg compiler flag set (GNU compiler) and using the gprof tool.

Thanks to you all and I hope to come back with the profiling outcome and some new insights,
Raj

ted
Site Admin
Posts: 5591
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

event delivery system vs. pointers for synaptic connectivity

Post by ted » Thu Aug 25, 2005 1:10 pm

Raj wrote:The price to pay for moving to pointers
could be huge.

One thing we know for sure: the overhead for numerical integration is a lot more
than the overhead of the event delivery system. If you need to be convinced, compare
run times for a net of integrate and fire cells vs. a net with the same architecture
that uses single compartment hh neurons connected by NetCons and ExpSyns.
Try to contrive the nets so the number of delivered events is the same (the simplest
architecture would be a fan-out net with a single spontaneously active cell that drives
a bunch of target cells, none of which talk to each other).

What is gained by implementing a model that avoids use of the event delivery
system? We won't have the event delivery system's overhead.

But what is lost? We won't be able to have conduction delay and synaptic
latency unless we simulate axonal spike propagation in the presynaptic cell.
So we'd have to add a lot of nodes with hh (or some other spike mechanism).
You didn't say how many voltage-gated currents are in your cells, but assuming
a complexity similar to hh, every axon node will add as much overhead as one of
your one-compartment cells, or 1/3 of a 3-compartment cell.

So unless we want to sacrifice conduction delay and synaptic latency (given the
importance of delays in networks, this seems like a bad idea), we're stuck with
a much bigger task, and simulations will execute far more slowly.

wwlytton
Posts: 64
Joined: Wed May 18, 2005 10:37 pm
Contact:

Wang papers

Post by wwlytton » Fri Aug 26, 2005 10:02 am

Mike Hines and I ported the Wang and Buzsaki model to NEURON. That model uses the same continuous connectivity with pointers that you mention. We achieved a considerable speedup by moving it over to NetCons.

In general, you cannot expect to see identical network simulations with two different integrators, or even with two different time steps with the same integrator, because the slippage of single spikes by a few microseconds will have cascading effects in the network. There are a variety of issues with network accuracy that are not easily resolved. In general, I strive for qualitative agreement when I want to assure myself that a network is behaving reasonably under two different integrators.

btw, make sure you are using cvode.condition_order(2) with the variable time step method.

Another paper about accuracy in networks is:

author = "Shelley, MJ and Tao, L",
title = "Efficient and accurate time-stepping schemes for integrate-and-fire
neuronal networks",
journal = "Journal of Computational Neuroscience",
year = "2001",
volume = "11",
pages = "111-119",

Raj
Posts: 219
Joined: Thu Jun 09, 2005 1:09 pm
Location: Hanze University of Applied Sciences
Contact:

Hf Stimulus Network Performance: Event Delivery or Pointers?

Post by Raj » Mon Aug 29, 2005 7:43 am

ted wrote:One thing we know for sure: the overhead of numerical integration is a lot more than the overhead of the event delivery system. If you need to be convinced, compare run times for a net of integrate and fire cells vs. a net with the same architecture that uses single compartment hh neurons connected by NetCons and ExpSyns.
I agree that if a network consists of abstract `exactly solvable' neurons and the event delivery system is used, we get extremely fast simulations. The simulations I'm doing, however, combine the event delivery system for communication with numerical integration for the neurons. Given the high-frequency stimulation regime, every instance of cvode is thrown back to an extremely small dt once every millisecond. It is this interaction of event delivery and cvode which might make the high-frequency stimulation regime an exception to the rule.

The possibility of including delays and latencies, and the all-or-none character of action potential generation, are all natural reasons to choose the event delivery system, and they are why I would like to hang on to it.

ted
Site Admin
Posts: 5591
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

event delivery without adaptive integration

Post by ted » Mon Aug 29, 2005 11:31 am

Raj wrote:Given the high-frequency stimulation regime, every instance of cvode is thrown back to an extremely small dt once every millisecond.
Raj wrote:It is this interaction of event delivery and cvode which might make the high-frequency stimulation regime an exception to the rule.
It isn't the event delivery system itself that is responsible for the small
dt, it's cvode's attempt to control local error. You can eliminate the latter
simply by turning adaptive integration off, and still take advantage of
the event delivery system's management of "synaptic latencies."

Raj
Posts: 219
Joined: Thu Jun 09, 2005 1:09 pm
Location: Hanze University of Applied Sciences
Contact:

Synaptic equation computation is the rate limiting step

Post by Raj » Fri Sep 02, 2005 8:45 am

As Hines suspected, and as I now see confirmed, the synaptic equation computation is the rate-limiting step. I have not been able to do the profiling yet, but I was able to optimize the mechanisms.

For the first-order (GABA) synapses I could move the saturation mechanism from the DERIVATIVE block to the NET_RECEIVE block, and then move the bookkeeping for the saturation mechanism to the NetCon, leaving me with a single state variable in which I can simply sum the contributions from all connections. I borrowed this trick from Ted's tmgsyn.mod. So the number of GABA synapse instances now scales linearly with network size.
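The reason this lumping works can be illustrated outside NEURON (a Python sketch; the decaying-exponential synapse, tau, and the event lists are illustrative stand-ins, not the actual GABA kinetics):

```python
import math

def lumped_expsyn(events, t_end, tau=5.0):
    """One state variable for many connections (sketch of the trick above).

    When the synaptic equation is linear, N per-connection states that are
    each bumped by their own events decay identically, so a single state
    bumped by every weighted event (as in a NET_RECEIVE block) yields the
    same total conductance.
    """
    g, t_last = 0.0, 0.0
    for t_ev, w in sorted(events):
        g *= math.exp(-(t_ev - t_last) / tau)  # decay since last event
        g += w                                 # add this event's weight
        t_last = t_ev
    return g * math.exp(-(t_end - t_last) / tau)

# Events from three different connections, each a list of (time, weight):
conn_a = [(1.0, 0.2), (4.0, 0.2)]
conn_b = [(2.0, 0.1)]
conn_c = [(3.5, 0.3)]

# Three separate per-connection states summed afterwards...
separate = sum(lumped_expsyn(c, 6.0) for c in (conn_a, conn_b, conn_c))
# ...equal one lumped state driven by all events:
lumped = lumped_expsyn(conn_a + conn_b + conn_c, 6.0)
```

This is exactly why the trick breaks for the saturating (non-linear) part of the mechanism: saturation makes the response depend on the joint history, so one of the two saturation mechanisms had to be moved to the NetCon or sacrificed.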

I did the same with the second-order AMPA/NMDA synapses. These, however, contained two saturation mechanisms; one can be implemented in the NetCon, the other I have sacrificed for now, and I'm trying to establish the impact of that change.

With these changes the estimated time for a full (6 s) run is now 4-5 hours, which is a lot but doable.
Since neither the number of CVode instances nor the number of events has changed, I have to admit that the interaction of the two was not all that important after all.

Raj
Posts: 219
Joined: Thu Jun 09, 2005 1:09 pm
Location: Hanze University of Applied Sciences
Contact:

Example of a linearized synapse

Post by Raj » Mon Sep 26, 2005 12:34 pm

In another thread a good solution appeared for linearizing a saturating synapse, which I would like to add here for completeness:

https://www.neuron.yale.edu/phpBB2/view ... light=ampa

The temporal properties are, however, quite different from those of the synapse in the Tegner, Compte and Wang model (Biol. Cybern. 87, 471-481 (2002)) mentioned before.

The previous posting is, of course, proof that if you don't read the NEURON book, you are bound to reinvent some wheels.
