There are several ways to generate random spike streams. One is to use a NetStim with its noise parameter set to a value in the range 0 < noise <= 1 (be sure to read the Programmer's Reference documentation of the NetStim class to find out exactly what this does).
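To make the role of noise concrete, here is a plain-Python sketch (an illustration, not NEURON code) of the interval rule NetStim uses: each interspike interval is (1 - noise)*interval plus a negative-exponentially distributed term with mean noise*interval, so noise = 0 gives a perfectly regular train and noise = 1 gives Poisson-like firing. The netstim_like helper name is ours.

```python
import random

def netstim_like(interval, number, start, noise, rng):
    """Sketch of NetStim-style event generation: each interval is
    (1 - noise)*interval plus an exponential term with mean
    noise*interval.  Illustrative only, not NEURON's implementation."""
    events = []
    t = start
    for _ in range(number):
        # noise = 0 -> regular train; noise = 1 -> Poisson-like train
        t += (1.0 - noise) * interval + noise * interval * rng.expovariate(1.0)
        events.append(t)
    return events

rng = random.Random(1)
train = netstim_like(interval=10.0, number=10, start=1.0, noise=1.0, rng=rng)
print([round(t, 3) for t in train])
```

Note that the helper takes the generator as an argument: that choice is exactly what the rest of this discussion is about.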
But what if you need more than one spike stream? Can you just add another NetStim and grind out spike times?
Not if you want your NetStims' streams to be independent of each other. Read on to find out why, and what to do about it.
The problem
Consider this example: a 20 ms simulation of two NetStims, each of which has the following parameters
interval = 10 ms
number = 10
start = 1 ms
noise = 1
produces events at these times
time      cell
 1.347     1
 6.015     0
10.123     0
12.628     1
13.663     1
14.913     0
But if cell 1's start time is 6 ms, the results are
time      cell
 6.015     0
 6.347     1
10.456     1
15.245     1
16.281     1
17.295     0
which demonstrates that the two event streams are not independent of each other.
This is because the two NetStims are drawing values from the same random number generator. Change any parameter of either one (interval, start time, number, or noise) and you'll also perturb the other's spike times, whether you want to or not.
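The mechanism behind this coupling can be sketched in plain Python (an analogy, not NEURON's code): when both streams draw each next interval from one shared generator as the simulation advances, the order of draws depends on the event times themselves, so changing one stream's start changes which draws the other stream receives. The run helper below is illustrative.

```python
import random

# Two noise = 1 streams sharing ONE generator.  Each time a stream
# fires, it draws its next interval from the shared generator, so the
# ORDER of draws depends on the event times -- that is what couples
# the streams.  Illustrative sketch, not NEURON's implementation.
def run(starts, interval=10.0, tstop=20.0, seed=42):
    rng = random.Random(seed)                    # the shared generator
    nxt = [s + interval * rng.expovariate(1.0) for s in starts]
    events = []
    while min(nxt) < tstop:
        i = nxt.index(min(nxt))                  # stream that fires next
        events.append((round(nxt[i], 3), i))
        nxt[i] += interval * rng.expovariate(1.0)
    return events

print(run(starts=[1.0, 1.0]))   # baseline
print(run(starts=[1.0, 6.0]))   # only stream 1's start changed, yet
                                # stream 0's spike times can change too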
And usually you don't want to, because at the very least such side effects introduce confounds that can interfere with interpretation of experimental manipulations. Suppose you have a model cell that receives multiple afferent input streams, and you want to see what happens if you delay the onset of one of the afferent spike trains, or if the mean firing frequency of an inhibitory afferent changes. Well, you're out of luck, because any perturbation to one afferent train is going to perturb all afferent trains.
But if you don't change the NetStims' parameters, everything will be OK, right?
Right, until you decide to parallelize your model by using MPI to distribute it over multiple processors. At that point you're going to discover that randomization of model setup and randomization of simulation execution will make model parameters and/or simulation results depend on the number of processors and how your model's cells are distributed over the processors. This is most undesirable, because an essential test of the parallel implementation of a model is the demonstration that it produces the same results as the serial implementation. If the parallel and serial implementations produce different results, then something is broken, and the parallel implementation cannot be a reliable surrogate for the serial implementation.
Fortunately, a particular strength of NEURON is that it enables you to parallelize models in such a way that the parallel code produces the same results as the serial code does, regardless of whether the parallel code is being executed on serial or parallel hardware, or the number of processors that are available on the parallel machine, or how the model is distributed over the parallel machine's processors.
So what's the solution to our current problem? How can we keep our NetStims from using a shared random number generator? Read on to discover the answer.
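As a preview of the idea behind the answer, sketched in plain Python rather than NEURON's actual mechanism: if each stream owns its own seeded generator, no stream's spike times can depend on another stream's parameters, and every stream is exactly reproducible from its seed. The spike_train helper and the seed values here are illustrative assumptions.

```python
import random

# Sketch of the fix: each stream gets a PRIVATE seeded generator, so
# its spike times depend only on its own parameters and its own seed.
# Illustrative plain Python, not NEURON's API.
def spike_train(start, interval, number, seed):
    rng = random.Random(seed)            # private generator for this stream
    t, out = start, []
    for _ in range(number):
        t += interval * rng.expovariate(1.0)   # noise = 1 style intervals
        out.append(t)
    return out

cell0 = spike_train(start=1.0, interval=10.0, number=5, seed=0)
cell1 = spike_train(start=6.0, interval=10.0, number=5, seed=1)
# Changing cell1's start or seed leaves cell0 untouched, and rerunning
# with the same seeds reproduces the same streams exactly.
```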