Variable time step with heterogeneous synaptic delays
Posted: Fri Apr 17, 2015 6:33 am
Hi,
I am currently attempting to re-implement a reasonably large model (1.5k HH-type point neurons, ~80k synapses) in NEURON. The network's main behaviour is highly rhythmic, synchronous spiking, so we get a nice performance increase from using a variable step size in the existing implementation (custom C code using the GNU Scientific Library). For that reason, and because it's good to be able to specify error bounds, I'm really keen to keep using a variable step method in NEURON.
Things were going great until I added synaptic delays that vary with the distance between cells, essentially giving each connection its own delay. With a uniform synaptic delay performance is good, but with the distance-dependent delays simulations take something like 20x longer to run - and are much slower than the original model implementation. From looking at the Programmer's Guide I think I can see why: every spike generates many events at different delivery times (one per synapse), and each of those forces CVode to stop and re-initialise at the event time.
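In case it helps to be concrete, the wiring I have at the moment is essentially the following (a stripped-down Python sketch with one connection and made-up numbers, not the real model):

```python
from neuron import h
h.load_file("stdrun.hoc")

# two minimal single-compartment HH cells standing in for the real ones
pre, post = h.Section(name="pre"), h.Section(name="post")
for sec in (pre, post):
    sec.L = sec.diam = 20.0           # um; geometry is just a placeholder
    sec.insert("hh")

syn = h.ExpSyn(post(0.5))             # placeholder synapse type
stim = h.IClamp(pre(0.5))             # drive the presynaptic cell so it spikes
stim.delay, stim.dur, stim.amp = 1.0, 50.0, 0.5

dist_um = 250.0                       # made-up cell-to-cell distance (um)
velocity = 500.0                      # made-up conduction velocity (um/ms)

# one NetCon per synapse, each with its own distance-dependent delay;
# with ~80k of these, every spike becomes many events at distinct
# delivery times, and CVode has to stop and re-initialise at each one
nc = h.NetCon(pre(0.5)._ref_v, syn, sec=pre)
nc.threshold = 0.0                    # spike detection threshold (mV)
nc.delay = 0.5 + dist_um / velocity   # per-connection delay (ms)
nc.weight[0] = 0.002                  # synaptic weight (uS)

cvode = h.CVode()
cvode.active(1)                       # variable time step

h.finitialize(-65)
h.continuerun(100)
```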
I'm just wondering if anyone else has faced this problem and come up with a solution? I can envisage something whereby each neuron receives the spike at the same time, stores the spike time (offset by the specific delay for that synapse) in a list, and then calculates the synaptic current at each time step by processing its list of received spikes (subject to some maximum cut-off time). This is similar to (although not quite as efficient as) how I implemented it in my previous software. Does that seem like a viable approach in NEURON?
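To make the idea a bit more concrete, here is the sort of per-connection bookkeeping I have in mind, written as plain Python rather than anything NEURON-specific (all names and numbers are placeholders; in practice I imagine this would live inside a point process / mod file):

```python
import math

class DelayedSpikeBuffer:
    """One per connection: stores delayed spike arrival times and
    evaluates the summed synaptic conductance at an arbitrary time t.
    Plain-Python sketch of the idea, not NEURON code."""

    def __init__(self, delay, weight, tau=2.0, cutoff=20.0):
        self.delay = delay            # connection-specific delay (ms)
        self.weight = weight          # peak conductance per spike (uS)
        self.tau = tau                # decay time constant (ms)
        self.cutoff = cutoff          # forget spikes older than this (ms)
        self.arrivals = []            # delayed spike arrival times (ms)

    def on_presynaptic_spike(self, t_spike):
        # called once per presynaptic spike, at the undelayed spike time,
        # so the event itself needs no per-connection delay
        self.arrivals.append(t_spike + self.delay)

    def conductance(self, t):
        # evaluated whenever the integrator needs the synaptic current;
        # prune arrivals beyond the cut-off, then sum exponential kernels
        self.arrivals = [ta for ta in self.arrivals if t - ta < self.cutoff]
        return sum(self.weight * math.exp(-(t - ta) / self.tau)
                   for ta in self.arrivals if t >= ta)

# usage: presynaptic cell fires at t = 5 ms; this connection has a 1.2 ms delay
buf = DelayedSpikeBuffer(delay=1.2, weight=0.002)
buf.on_presynaptic_spike(5.0)
print(buf.conductance(7.0))           # conductance 0.8 ms after arrival
```

The attraction, as far as I can see, is that the spike event itself could then be delivered with a single uniform delay, so the integrator only sees one event per spike and the heterogeneous delays are handled inside the synapse.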