Race condition with multiple artificial cell instances accessing a data file
Posted: Mon Aug 04, 2008 4:52 pm
I've written an artificial cell which generates spikes at times specified by an input data file. The NMODL code just reads the spike times out of the file and calls net_send() once for each spike. All of this happens at initialization time -- there's no NMODL code that executes during the simulation run. All the heavy lifting is done by NEURON's event delivery system; the artificial cell does nothing but load the event queue with spike times.
Code:
NEURON {
    ARTIFICIAL_CELL testRead
    RANGE cellID
}

PARAMETER {
    cellID = 0
}

ASSIGNED {
    q   : remember -- this is a double, not a float
    qq
    ret
}

INITIAL {
    VERBATIM
    FILE *fp;
    fp = fopen("/usr/me/testData.txt", "r");
    printf("testRead opened file\n");
    ENDVERBATIM
    ret = 1
    while (ret == 1) {
        VERBATIM
        ret = fscanf(fp, "%lf", &q);  // lf, not f, 'cause q is a double, not a float
        ret = fscanf(fp, "%lf", &qq);
        ENDVERBATIM
        if (ret == 1) {
            printf("testRead read %lf,%lf from file\n", q, qq)
            if (qq == cellID) {
                : queue a self-event at the spike time (t is 0 during INITIAL)
                net_send(q, 99)
            }
        }
    }
    VERBATIM
    fclose(fp);
    printf("testRead closed file\n");
    ENDVERBATIM
}

NET_RECEIVE (w) {
    if (flag == 99) {
        : relay the queued self-event to whatever this cell projects to
        net_event(t)
        printf("testRead event at t= %g\n", t)
    }
}
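For context, here's roughly how I create and drive these cells from Python (a minimal sketch, not the real network: the cell count, run length, and the idea of observing output through dummy NetCons are placeholders):

Code:
import neuron
h = neuron.h
h.load_file("stdrun.hoc")

cells, netcons, spike_records = [], [], []
for cid in range(2):                # placeholder: the real model has many more cells
    c = h.testRead()                # artificial cell, no section required
    c.cellID = cid                  # this instance replays rows whose second column == cid
    nc = h.NetCon(c, None)          # dummy NetCon just to observe net_event() output
    vec = h.Vector()
    nc.record(vec)                  # collects the times at which this cell fires
    cells.append(c); netcons.append(nc); spike_records.append(vec)

h.stdinit()                         # every instance's INITIAL block opens and reads the file here
h.continuerun(100)                  # placeholder run length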
Here's the problem, though. The model includes multiple instances of this artificial cell type, and nothing, in principle, prevents multiple instances from reading the data file simultaneously. On a single-CPU system the cells' INITIAL blocks presumably execute one at a time, but that seems like a fragile way to avoid a race condition. Moreover, the model will be run under threaded NEURON someday, and that will break it for sure.
At first I thought the solution would be to put the main part of the code inside a PROCEDURE (call it queue_spikes()) rather than the INITIAL block. Then, in hoc or Python, I could do something like this:
Code:
for cell in artCells: cell.queue_spikes()
Unfortunately, one can't put a net_send() inside a PROCEDURE -- only inside an INITIAL or a NET_RECEIVE block.
An alternative approach that avoids the race condition would be to read the file in hoc or Python and pass it to the artificial cells via a FUNCTION_TABLE. In some respects this is preferable. True, it would involve some computational overhead for interpolation, which is wasted in this case. More importantly, however, if I understand correctly, this approach will be very memory inefficient: all the spike times will be kept in memory in the FUNCTION_TABLE *after* they've been copied into the event queue and are no longer needed. (Essentially, there will be two redundant copies of the same data, one in the FUNCTION_TABLE and the other in the event queue.) There are about 100,000 spikes, and someday, when the network is upscaled, there will be more, so I think memory efficiency is a legitimate consideration here.
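To put a rough number on it (assumed figures, not measurements), the redundant copy is the pair of Vectors backing the FUNCTION_TABLE, on top of the events already in the queue:

Code:
# Data kept alive by the two Vectors backing the FUNCTION_TABLE
# (spike times plus the index vector used as the abscissa).
n_spikes = 100000          # today's count; grows when the network is upscaled
bytes_per_double = 8
table_bytes = 2 * n_spikes * bytes_per_double
print(table_bytes / 1e6, "MB")   # about 1.6 MB now, scaling linearly with spike count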
Can I attach Vectors to a FUNCTION_TABLE, use the FUNCTION_TABLE, and then delete the Vectors? Obviously I have to make sure my NMODL code never calls the FUNCTION_TABLE again after the Vectors have been deleted, but, assuming my program logic guarantees that, am I safe with this maneuver? ... or is there some better way of doing this that I'm overlooking? Here's the kind of thing I have in mind:
Code:
from numpy import *
import neuron
h = neuron.h
h.load_file("stdrun.hoc")

# Assume this loads a mechanism myArtCell with a FUNCTION_TABLE named spkTimeFunc
h.nrn_load_dll('./mod/i686/.libs/libnrnmech.so')

# Assume this returns a 1-d numpy array of spike times.
spkTimes = readSpikeTimesFromFile("./parameters/spkTimes.txt")
numSpks = len(spkTimes)

spkTimeVec = h.Vector()
spkTimeVec.from_python(spkTimes)
spkNumVec = h.Vector()
spkNumVec.from_python(arange(numSpks))

# Attach the Vectors to the FUNCTION_TABLE: y values, size, x values.
# Is referencing element 0 really necessary? Seems a bit redundant.
h.table_spkTimeFunc_myArtCell(spkTimeVec._ref_x[0], numSpks, spkNumVec._ref_x[0])

h.stdinit()   # INITIAL blocks run here and copy the spike times into the event queue

# The maneuver in question: drop the Vectors now that (I hope) they're no longer needed.
del spkTimeVec
del spkNumVec

tstop = max(spkTimes)
h.continuerun(tstop)