Hi all.
We are trying to convert an older model to run in parallel. The network consists of 4 neurons connected to each other. The connections from the source cells to the target cells are made by:
1) assigning a gid to each cell via ParallelContext.set_gid2node(gid, ParallelContext.id)
2) connecting the source cell to a NetCon with a nil target
3) calling ParallelContext.cell(source-cell-gid, netcon, 1)
4) on the machine containing the target neuron, calling ParallelContext.gid_connect(source-cell-gid, ampa), where ampa is a POINT_PROCESS
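A minimal hoc sketch of these four steps (hypothetical names throughout: the soma section, the src_gid value, and ExpSyn standing in for our AMPA POINT_PROCESS; in a real run, steps 1-3 and step 4 execute on possibly different ranks):

```hoc
objref pc, nc, ampa, syn_nc, nil
pc = new ParallelContext()
create soma

// step 1: register the cell's gid on the rank that owns it
src_gid = 0
pc.set_gid2node(src_gid, pc.id)

// steps 2-3: attach a spike source to the gid via a NetCon
// with a nil target
soma nc = new NetCon(&v(0.5), nil)
pc.cell(src_gid, nc, 1)

// step 4: on the rank holding the target POINT_PROCESS,
// connect the gid to the synapse (ExpSyn is a stand-in)
soma ampa = new ExpSyn(0.5)
syn_nc = pc.gid_connect(src_gid, ampa)
syn_nc.weight = 0.01   // placeholder weight (uS)
syn_nc.delay = 1       // placeholder delay (ms)
```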
I would like to ask a few questions:
1) stdinit() does not exist in NEURON version 7.1, which we use. What is the function to initialize the simulation? (We are calling finitialize().)
2) The model crashes with a segmentation fault at ParallelContext[0].set_maxstep(10) when run in parallel. I have figured out that it can work if I remove all the gid_connect() calls, but nothing beyond that. Also, if I don't initialize the model with finitialize(), the call to set_maxstep() hangs forever. Could this have to do with the fact that the gid_connect() targets are POINT_PROCESSes? I have checked that the POINT_PROCESSes actually exist as targets on the machine where they are being gid_connect()ed.
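For reference, the calling sequence we use looks roughly like this (a simplified sketch; tstop and all the cell/connection setup are elided):

```hoc
// Hypothetical outline of our parallel run sequence.
objref pc
pc = new ParallelContext()
// ... create cells, register gids, make all gid_connect() calls ...
{pc.set_maxstep(10)}   // cap on the spike-exchange integration interval (ms)
stdinit()              // or a direct finitialize(v_init) where stdinit() is missing
{pc.psolve(tstop)}     // parallel run to tstop
```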
I would appreciate any insight about why this is happening.
Thanks
Segmentation fault on set_maxstep()
Moderator: hines
- Site Admin
- Posts: 6300
- Joined: Wed May 18, 2005 4:50 pm
- Location: Yale University School of Medicine
- Contact:
Re: Segmentation fault on set_maxstep()
grnavigator wrote: We are trying to convert an older model to work in parallel. The network consists of 4 neurons connected to each other.

Doesn't sound like enough of a network to require or benefit from distributing over multiple processors with MPI, without also requiring multisplit to achieve balance. If you have multicore machines and the model cells are sufficiently complex, multithreaded multisplit simulation might be helpful, and could be done with much less effort on your part (unless something in your model is inherently not threadsafe, in which case you're stuck). Another alternative, if you need to execute many runs, is bulletin-board-style parallelization, which requires relatively minor changes to serial source code.
But in case none of these alternatives is possible . . .
grnavigator wrote: stdinit() does not exist in neuron version 7.1 that we use

Really? There is no stdrun.hoc in nrn/lib/hoc? Or there is, but it doesn't define a proc stdinit()? Here's what it looks like in the most recent 7.2:
proc stdinit() {
    cvode_simgraph()
    realtime = 0
    setdt()
    init()
    initPlot()
}
proc stdinit() {
    realtime = 0
    startsw()
    setdt()
    init()
    initPlot()
}
proc init() {
    finitialize(v_init)
    fcurrent()
}
grnavigator wrote: The model crashes with a segmentation fault at ParallelContext[0].set_maxstep(10) when run in parallel. I have figured out that if I remove all the gid_connect() calls it can work.

It can work without any connections between spike sources and spike targets?
grnavigator wrote: if I don't initialize the model with finitialize(), the call to set_maxstep() will hang forever

Good, since the model would not have been initialized.
grnavigator wrote: Could it have to do with the fact that the gid_connect() targets are POINT_PROCESSes?

Not if the source code for the targets contains NET_RECEIVE blocks.
grnavigator wrote: I have checked that the POINT_PROCESSes actually exist as targets on the machine when they are being gid_connect()ed.

That's good.
On the off chance that you're running into a bug in 7.1, is there any way you could give the most recent alpha version of 7.2 a try? Or, better yet, the latest development code from the Mercurial repository?
- Posts: 2
- Joined: Tue Feb 16, 2010 7:37 am
Re: Segmentation fault on set_maxstep()
Ted,
Thanks for your prompt response. It helped a lot to fix our problems. Turns out the problem was that some delays in the NetCons were smaller than the dt value and set_maxstep() would hang. The model is indeed small now, but the plan is to expand it to hundreds of neurons in a large cluster. Thanks again for your help
- George
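For anyone who hits the same hang: a quick sanity check over the NetCons before calling set_maxstep() would catch this (a sketch, assuming the model keeps its NetCons in a hypothetical List named nclist):

```hoc
// Warn about (and optionally raise) any NetCon delay below dt.
// "nclist" is a hypothetical List holding the model's NetCons.
objref nc
for i = 0, nclist.count() - 1 {
    nc = nclist.o(i)
    if (nc.delay < dt) {
        printf("NetCon %d: delay %g < dt %g, raising to dt\n", i, nc.delay, dt)
        nc.delay = dt
    }
}
```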
Re: Segmentation fault on set_maxstep()
stdinit() from {load_file("nrngui.hoc")} wraps the call to finitialize(v_init)
along with some housekeeping for plotting. You can get by with a direct
call to finitialize().
If an interprocessor NetCon.delay is < dt, then you should get an error message like:
0 nrniv: mindelay is 0 (or less than dt for fixed step method)
when pc.psolve(tstop) is called. pc.set_maxstep(10) should not hang. Since it does hang with your model, can you send all the hoc, ses, and mod files to me in a zip file so I can reproduce the error (how many processors are you using, and what is your launch command?) and either fix the bug or provide a suitable error message? Send to michael dot hines at yale dot edu