Hello,
I'm having trouble outputting long strings of data from NEURON. I have been using the simulator to create raster plots and cumulative histogram plots of cell populations, then using Vector → Save to File and selecting the histogram trace to output. This works well as long as my simulation trials are under 3,000,000 time steps, or around 5 minutes of simulated time at dt = 0.1 ms. I would like to plot and record longer simulations, but NEURON shuts down with the following error:
terminate called after throwing an instance of 'std::bad_alloc'
what(): St9bad_alloc
/home/neuron/nrn/bin/network/x86_64/special: line 13:32620 Aborted
"${NRNIV}" -dll "/home/neuron/nrn/bin/network/x86_64/.libs/librnmech.so" "$@"
Thanks for any input you might have!
Plotting long periods of data
-
- Site Admin
- Posts: 6384
- Joined: Wed May 18, 2005 4:50 pm
- Location: Yale University School of Medicine
- Contact:
Do you really need the moment-to-moment details of v? If you're primarily analyzing spike
times, why not just capture them with NetCon record? That should cut down data storage
and I/O by a couple orders of magnitude, and allow smaller dt (good for accuracy; 0.1 ms
is pretty coarse). If later you need to examine the detailed time course of any variable, this
can be generated by simulating a subnet that contains just the cell(s) of interest, driven by
spike events from a PatternStim object that uses the recorded spike times to recreate an
"afferent milieu" for the subnet.
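The replay step can be sketched roughly as follows, assuming the spike times and source indices were saved into Vectors named tvec and idvec, and that the subnet's connections are set up by source gid; the exact requirements of PatternStim should be checked against your NEURON version:
Code: Select all
// rough sketch of spike replay into a subnet;
// tvec (spike times) and idvec (source indices) are assumed
// to have been recorded in an earlier full-net simulation
objref ps
ps = new PatternStim()
ps.play(tvec, idvec) // event i is delivered as if source idvec.x[i] fired at tvec.x[i]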
In this day of gigabyte memory, a 3 Meg vector shouldn't be a problem unless your model is otherwise filling up memory or you have a lot of Vectors. I assume you are on Linux.
You can get more failure details if you run
Code: Select all
gdb nrniv
run -dll "the path you see in $CPU/special" how_you_start.hoc
and then on failure type
Code: Select all
where
I have a 2GB x86_64 machine. If it does not take too long to run and your machine is the same size, you can send me a zip file with all the hoc, ses, and mod files needed and I'll see precisely what is going wrong.
Here is my code for creating raster and cumulative histogram plots:
I use NetCon record to capture each individual cell, then output them into a raster plot and a cumulative histogram plot so that I can estimate total population activity. If there is a simpler way to record total cell population spike activity as one trace I would be happy to hear it, but I was not able to figure out how to use NetCon for this task, and I cannot find a PatternStim explanation in the Programmer's Reference. Thanks as always for your help.
Code: Select all
objref netcon, vec, spikes, nil, graster
objref vecs1[100], gg1, yy1
proc preprasterplot1() {
    spikes = new List()
    for i = 0, hgcells.count()-1 {
        vec = new Vector()
        hgcells.object(i).soma netcon = new NetCon(&v(0.5), nil, -10, 1, 0)
        netcon.record(vec)
        spikes.append(vec)
        vecs1[i] = vec
    }
    gg1 = new Graph(0)
    gg1.view(0, 0, tstop, hgcells.count(), 1070, 264, 300.48, 200.32)
    graster = new Graph(0)
    graster.view(0, 0, tstop, hgcells.count(), 1070, 524, 300.48, 200.32)
}
objref spikey
proc showraster1() {
    gg1.erase_all()
    graster.erase_all()
    for i = 0, hgcells.count()-1 {
        spikey = spikes.object(i).c()
        spikey.fill(i+1)
        spikey.mark(graster, spikes.object(i), "|", 6)
    }
    yy1 = new Vector()
    for i = 0, hgcells.count()-1 {
        yy1 = yy1.append(vecs1[i])
    }
    high = yy1.max()
    yy1 = yy1.histogram(0, tstop, binwidth)
    yy1 = yy1.c(1)
    yy1.line(gg1, yy1.c().indgen(0, binwidth), 1, 1)
}
Additionally, I have tried the error checking. When I run -dll with the path to special and init.hoc, I am first told, in regard to the path to special, "invalid ELF header", and then the program runs until it errors while attempting to insert a mechanism from a mod file which had already been compiled, stating "detaching after fork from child process 16252".
tsa wrote: "a simpler way to record total cell population spike activity"
You could just capture all spike times to a single pair of Vectors.
Assuming that all cells of interest are biophysical model cells implemented as objects
whose objrefs have been appended to a List called allcells, then after the net has been
set up
Code: Select all
objref nil, tvec, idvec
proc recspikes() { local ii  localobj nc_
    tvec = new Vector()
    idvec = new Vector()
    for ii = 0, allcells.count()-1 {
        allcells.o(ii).soma nc_ = new NetCon(&v(0.5), nil)
        // . . . set threshold to whatever . . .
        nc_.record(tvec, idvec, ii+1) // ii+1 so first cell's spikes will be marked at y==1
    }
}
recspikes()
Code: Select all
objref gr
gr = new Graph()
idvec.mark(gr, tvec, "|")
followed by whatever adjustments of the Graph's size and scaling are needed.
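If the detailed activity of one particular cell is needed later, its spike times can be pulled out of the combined pair of Vectors. A sketch (the cell index j is hypothetical, and indvwhere's exact signature should be checked in the Programmer's Reference):
Code: Select all
objref idx, cellspikes
j = 5 // hypothetical: index of the cell of interest
idx = new Vector()
idx.indvwhere(idvec, "==", j) // indices of events that belong to cell j
cellspikes = tvec.ind(idx)    // that cell's spike times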
Hello,
I have checked through my code and implemented your more efficient spike time capture method successfully.
However, I still get the same bad_alloc error when, and only when, I include code for creating cumulative spike histograms in long simulations. I am able to run either long simulations which output only raster plots, or shorter trials (5 minutes of simulated time) which output both raster plots and cumulative spike histograms. I am wondering if the histogram code itself is the problem.
At this point I am lost as to what the problem could be. Thanks again for your assistance. The segment of code which gives me errors is quite simple and uses features available in the Vector and Graph classes:
Code: Select all
objref graph1, yy
graph1 = new Graph()
binwidth = 10
yy = new Vector()
yy = yy.append(tvec)
yy = yy.histogram(0, tstop, binwidth)
yy.line(graph1, yy.c().indgen(0, binwidth), 1, 1)
gdb pointed out the error.
The old GNU library I am using for histograms uses a short int for the histogram size, so no histogram can be larger than 32K bins. (Note that 5 minutes of simulated time at binwidth = 10 ms is already 30,000 bins, just under that limit, which would explain why your runs fail only beyond that length.)
Actually, though, that is not the crazy part. I see that the implementation of histogram item addition does a linear search over the bins, under the assumption that the bins are not uniform. So the time to fill the histogram is quadratic: proportional to the number of items times the number of bins. This was obviously intended for histograms with a modest number of bins, if not for negexp distributions.
I'll re-implement the histogram to allow int size and random access for item addition (under the assumption that all bins are the same size).
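In the meantime, a uniform-bin histogram can be filled directly in hoc in time linear in the number of spikes, since with equal bins the bin index of a spike at time t is just int(t/binwidth). A sketch using the tvec, tstop, and binwidth from the code above:
Code: Select all
objref yy2
nbins = int(tstop/binwidth) + 1
yy2 = new Vector(nbins, 0) // one counter per bin, initialized to 0
for ii = 0, tvec.size()-1 {
    b = int(tvec.x[ii]/binwidth) // direct (random access) bin index
    yy2.x[b] = yy2.x[b] + 1
}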