Variable time step with heterogeneous synaptic delays

Moderator: wwlytton

bmerrihort
Posts: 3
Joined: Thu Apr 16, 2015 8:01 am

Variable time step with heterogeneous synaptic delays

Post by bmerrihort »

Hi,

I am currently attempting to re-implement a reasonably large model (1.5k HH-type point neurons, ~80k synapses) in NEURON. The main behaviour of the network features highly rhythmic synchronous spiking, and we therefore get nice performance increases by using a variable step size in the existing implementation (custom C code with the GNU scientific library). For this reason, and because it's good to be able to specify error bounds, I'm really keen to continue using a variable step method in NEURON.

Things were going great until I added in synaptic delays that varied according to the distance between cells, essentially giving a different delay to each connection. For a uniform synaptic delay performance is good, but with the distance dependent delay simulations take something like 20x longer to run - and are much slower than the original model implementation. From looking at the Programmer's Guide I think I can see why this is - every spike essentially generates many events at different times (one for each synapse), each of which causes CVode to reset and hunt for the precise event time.

I'm just wondering if anyone else has faced this problem and come up with a solution? I can envisage something whereby each neuron receives a spike at the same time and stores the spike time (modified with the specific delay for that synapse) in a list, and then calculates the synaptic current at each time step by processing its list of received spikes (subject to some maximum cut-off time). This is similar to (although not quite as efficient as) how I implemented it in my previous software. Does it seem like a viable approach in NEURON?
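Roughly, the buffering idea could be sketched like this (standalone Python, not NEURON code; all names are invented for illustration):

```python
import bisect
import math

class SynapseBuffer:
    """Per-neuron buffer of delayed spike arrivals (hypothetical sketch)."""
    def __init__(self, delays, cutoff):
        self.delays = delays    # one delay per afferent synapse (ms)
        self.cutoff = cutoff    # forget arrivals older than this (ms)
        self.arrivals = []      # sorted arrival times = spike time + delay

    def receive(self, syn_index, spike_time):
        # Store the delayed arrival time instead of scheduling a
        # separate integrator event per synapse.
        bisect.insort(self.arrivals, spike_time + self.delays[syn_index])

    def synaptic_drive(self, t, kernel):
        # Discard arrivals that have aged past the cut-off time.
        while self.arrivals and t - self.arrivals[0] > self.cutoff:
            self.arrivals.pop(0)
        # Sum kernel contributions from arrivals already in the past.
        return sum(kernel(t - ta) for ta in self.arrivals if ta <= t)

buf = SynapseBuffer(delays=[1.0, 2.5], cutoff=10.0)
buf.receive(0, 5.0)   # arrives at t = 6.0
buf.receive(1, 5.0)   # arrives at t = 7.5
drive = buf.synaptic_drive(8.0, kernel=lambda dt: math.exp(-dt / 3.0))
```

The point of the scheme is that one presynaptic spike produces one stored number per target, processed lazily at each step, rather than one scheduled event per synapse.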
ted
Site Admin
Posts: 6289
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

Re: Variable time step with heterogeneous synaptic delays

Post by ted »

With global variable time steps, the cell in which things are changing most rapidly dictates the time step that is used for all cells. Consequently asynchrony is the enemy of simulation speed. You might get some small benefit from using local variable time steps. If you really need speedup, abandon adaptive integration and parallelize your model.
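The effect is easy to see with a toy calculation (plain Python, purely schematic): under a global step, every cell advances with the smallest step any cell can tolerate, so one rapidly changing cell drags the whole network down.

```python
# Schematic per-cell tolerable steps (ms); "cell2" is mid-spike.
tolerable = {"cell0": 1.0, "cell1": 0.8, "cell2": 0.01}

# Global variable step: all cells take the minimum step.
global_dt = min(tolerable.values())   # 0.01 ms for everyone

# Local variable steps: each cell keeps its own step.
local_dt = dict(tolerable)
```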
bmerrihort wrote:From looking at the Programmer's Guide I think I can see why this is - every spike essentially generates many events at different times (one for each synapse), each of which causes CVode to reset and hunt for the precise event time.
You're misinterpreting the discussion of "Events"
http://www.neuron.yale.edu/neuron/stati ... tml#events
"Event hunting" happens only if user-written code explicitly specifies an abrupt change of some parameter or variable, as in
if (t == t1) celsius = celsius + 10
or
if (vpre>vthresh) gsynpost = 2*gsynpost
It doesn't happen if such changes are implemented through hooks into NEURON's event delivery system that are provided by hoc or NMODL, e.g. cvode.event, netcon.event, or statements in a NET RECEIVE block, nor will it happen if your network is implemented by using NetCons to connect spike sources to synaptic targets.
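That delivery mechanism can be caricatured with an ordinary priority queue (standalone Python sketch, not NEURON's actual internals; names are invented): a spike at time t on a connection with delay d simply schedules a delivery at t + d, so the integrator only has to stop at already-known event times and never has to hunt for them.

```python
import heapq

class EventQueue:
    """Toy model of delayed event delivery (not NEURON's implementation)."""
    def __init__(self):
        self._q = []

    def send_spike(self, t, delay, target):
        # The delivery time is known the instant the spike occurs,
        # so no root-finding ("event hunting") is ever needed.
        heapq.heappush(self._q, (t + delay, target))

    def deliveries_until(self, t):
        # Pop every event whose delivery time has been reached.
        out = []
        while self._q and self._q[0][0] <= t:
            out.append(heapq.heappop(self._q))
        return out

q = EventQueue()
q.send_spike(10.0, 1.2, "syn_a")   # delivered at 11.2
q.send_spike(10.0, 0.3, "syn_b")   # delivered at 10.3
early = q.deliveries_until(11.0)   # only syn_b has arrived by t = 11
```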
I can envisage something whereby each neuron receives a spike at the same time and stores the spike time (modified with the specific delay for that synapse) in a list, and then calculates synaptic current at each time step by processing its list of received spikes (subject to some maximum cut-off time).
A nonsolution to a nonproblem. The problem is simply that asynchronous spiking necessarily kills performance when global variable time step integration is used.
bmerrihort
Posts: 3
Joined: Thu Apr 16, 2015 8:01 am

Re: Variable time step with heterogeneous synaptic delays

Post by bmerrihort »

Hi Ted and thanks for the quick reply.
ted wrote:With global variable time steps, the cell in which things are changing most rapidly dictates the time step that is used for all cells. Consequently asynchrony is the enemy of simulation speed.
Agreed, and since my model contains most of its firing in short windows of time I've found that I get drastic speed increases with variable step. The following three figures (from my existing simulator, I'm afraid - I'm not yet proficient enough to produce such plots from NEURON) show this:
[Image: three simulation plots, panels A-C, described below]
A) Shows a simulation with variable step, which took 61s to run.
B) Shows a simulation with the step size fixed at 0.1ms (the minimum chosen by the variable step solver in A) - it took 150s to run.
C) Shows another variable step simulation, but this time with all synaptic delays constant - it took 59s to run. The delays all being equal does make spiking slightly more synchronous, and therefore makes things run a tiny bit faster, but the difference is negligible.

(Edit: whoops - looks like I used constant delays for the fixed step figure. This doesn't change anything though.)
ted wrote: You might get some small benefit from using local variable time steps.
Local time steps would probably be perfect for this, although when I quickly tried it there was an error that I think might have been due to the model making use of gap junctions. That's a problem for another day though...
ted wrote: If you really need speedup, abandon adaptive integration and parallelize your model.
It's important to us to run batches of simulations using a bunch of generated networks, so that we can get some idea of how the statistics we measure vary from simulation to simulation. This means that I run multiple simulations at once (one per core), and therefore don't think I'd get a huge benefit from parallelizing the individual simulations. I'm really just trying to get it to run as fast as possible on a single core.
ted wrote: You're misinterpreting the discussion of "Events" (...)
Then I am quite puzzled about what's going on. If I run the above model in NEURON with a variable step size and all synaptic delays equal to 1ms it takes 112s to simulate 200ms. However, if I allow the delays to vary from synapse to synapse it takes half an hour! Using the fixed step solver, the simulation time is almost identical in both cases. Since I guess my original idea for what was wrong was incorrect, I'll continue trying to dig into what the problem is. If you have any suggestions for where to look first, they'd be very welcome.
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: Variable time step with heterogeneous synaptic delays

Post by hines »

Your figs A and C take 61s and 59s respectively. The variable dt pattern is very similar. Then you say:
variable step size and all synaptic delays equal to 1ms it takes 112s
delays to vary from synapse to synapse it takes half an hour
If you plot the variable dt pattern for the half hour sim, maybe the sim no longer synchronizes and there are no intervals with long time steps.
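One way to look at the step pattern: record the sequence of time points the solver actually visits and difference them (shown here on a made-up time vector; in NEURON the times could be captured with Vector().record and the same arithmetic applied).

```python
# Times visited by a variable-step solver (invented example data).
times = [0.0, 0.5, 1.5, 1.6, 1.61, 1.62, 2.5, 4.0]

# The step-size pattern is just the successive differences.
steps = [t1 - t0 for t0, t1 in zip(times, times[1:])]

smallest = min(steps)
largest = max(steps)
# Long runs of tiny steps with no recovery to large ones would explain
# a variable-step run being slower than the fixed-step equivalent.
```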
bmerrihort
Posts: 3
Joined: Thu Apr 16, 2015 8:01 am

Re: Variable time step with heterogeneous synaptic delays

Post by bmerrihort »

hines wrote:Your fig A and C take 61s and 59s respectively. The variable dt pattern is very similar. Then you say:
variable step size and all synaptic delays equal to 1ms it takes 112s
delays to vary from synapse to synapse it takes half an hour
Sorry for the confusion. The plots (and times quoted) at the top of my post are from my existing simulator, which uses the adaptive RKF45 solver from the GNU Scientific Library. I wanted to demonstrate that heterogeneous synaptic delays shouldn't make a big impact on the choice of step size. When I run the same model in NEURON, I get 112 s vs 30 minutes depending on whether or not all the synaptic delays are equal.
hines wrote: If you plot the variable dt pattern for the half hour sim, maybe the sim no longer synchronizes and there are no intervals with long time steps.
We're pretty confident that this is a stable regime - we have run simulations that continue for a very long time and the pattern continues.
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: Variable time step with heterogeneous synaptic delays

Post by hines »

I notice in your plots that the minimum step size is 0.1 ms. Does this mean that spikes are forced onto the resolution time step boundaries?
Perhaps it would be helpful for you to send me the NEURON code needed to reproduce the half hour run (all the hoc, ses, py, and mod files) in a zip file to
michael dot hines at yale dot edu. That will allow me to see the time step values and the spike times.
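Whether the spikes land on step boundaries is easy to check once the spike times are in hand: test whether each one is, within roundoff, a multiple of the 0.1 ms resolution (plain Python sketch on invented data).

```python
def on_grid(spike_times, resolution=0.1, tol=1e-9):
    """True if every spike time is a multiple of `resolution`, within tol."""
    return all(abs(t - round(t / resolution) * resolution) < tol
               for t in spike_times)

# Invented spike times for illustration:
aligned = on_grid([10.0, 10.1, 23.4])     # all on the 0.1 ms grid
free = on_grid([10.037, 10.1, 23.4])      # 10.037 falls off the grid
```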