ode solver in parallel NEURON
Posted: Wed Mar 28, 2007 1:12 pm
Hi,
I have a more general question about the ODE solver in the ParallelNetManager environment in NEURON.
I have "parallelised" a network model and I am quite sure that all parameters in the serial and the parallel versions are identical.
When I simulate the network using the parallel environment (running on a serial machine, though!), the first 10 spikes (~50 ms) are very similar to the serial version, but after that the two network trajectories diverge, i.e. the spike times of the neurons differ between the serial and the parallel version.
(In fact, the first 50 ms of the serial version are identical to the parallel version if I do NOT use CVODE in the serial version!)
However, if I repeat the parallel simulation I get exactly the same results on every parallel run. (The network I am using might show chaotic behaviour, so I wanted to make sure these differences are not due to minimal differences in initial conditions.)
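To pin down where the two runs part ways, one can compare the recorded spike times of a given cell directly. A minimal sketch in plain Python (the `first_divergence` helper and the example spike lists are hypothetical stand-ins for whatever your recording vectors contain):

```python
def first_divergence(spikes_a, spikes_b, tol=0.1):
    """Return the earliest spike time (ms) at which the two runs
    differ by more than `tol` ms, or None if they agree throughout."""
    diverged_at = None
    for ta, tb in zip(spikes_a, spikes_b):
        if abs(ta - tb) > tol:
            diverged_at = min(ta, tb)
            break
    else:
        # one run may simply produce more spikes than the other
        if len(spikes_a) != len(spikes_b):
            longer = spikes_a if len(spikes_a) > len(spikes_b) else spikes_b
            diverged_at = longer[min(len(spikes_a), len(spikes_b))]
    return diverged_at

# Illustrative data only: early spikes agree, divergence after ~50 ms.
serial   = [5.2, 12.7, 20.1, 33.4, 51.0]
parallel = [5.2, 12.7, 20.1, 33.4, 53.8]
print(first_divergence(serial, parallel))  # prints 51.0
```

Running this per cell over both simulations would show whether the divergence really sets in at the same point (~50 ms) everywhere, or creeps in cell by cell.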
For those who are interested, have a look at the figures on Picasa (use the magnification tool to see details in the figures):
http://picasaweb.google.com/ubartsch/Ne ... 1069324194
http://picasaweb.google.com/ubartsch/Ne ... 1069324178
My question is:
Is there a general difference between the serial and the parallel solver (apart from the implementation), and can the divergence of the network trajectories (serial vs. parallel code) be explained by this difference?
(set_max_step doesn't seem to have any effect on the result.)
And why is the simulation using fixed-step integration (no CVODE) more similar to the parallel version?
Many thanks for a hint!
Cheers
Ullrich