gap junctions, pointers and multithreading
gap junctions, pointers and multithreading
I'm using nrn 7.0 and want to use several processors. By adding THREADSAFE to the mod files, most compilation notifications are resolved. However, pointers are not threadsafe, and I need pointers for the gap junction code I'm using. The gap junction is very basic (and probably taken from another model):
NEURON {
...
POINTER vgap
THREADSAFE
}
BREAKPOINT {
i = (v - e) / r
}
(evidently that's not all the code for the gap junction, but it highlights the use of the pointer)
and in the hoc code I add the gap junction as follows:
setpointer gapjunction[xpos][ypos][nn*2].vgap, interneuron[conx][cony].soma.v(.5)
Is there a way to (i) make the pointer threadsafe or (ii) work around the pointer?
Thanks in advance, Ben
Re: gap junctions, pointers and multithreading
Remember that adding THREADSAFE to a mod file does not make the model threadsafe; it is an assertion by the author that the model is, in fact, threadsafe despite the potentially non-threadsafe issues noticed by the nmodl translator, and that it is therefore ok to force nocmodl to emit threaded code.
Sadly, gap junctions with POINTER are not threadsafe (except, possibly, if the simulation happens to be using multisplit). Flow control for a single dt integration step is: at the beginning of the step the work divides into threads, each thread does a full step in parallel, and then the threads come back together into a single main thread. Unfortunately, one thread may be near the end of its step and updating the voltage while another thread is using that voltage in the BREAKPOINT block. In fact, that would certainly be the case if one requested sequential threads (some of the pointer voltages would be from the beginning of the step and some from the end). The reason gaps would be safe when multisplit is used is that a single dt integration step is divided into several phases. Each phase divides into threads, and the threads come back together at the end of the phase (so the entire sim is ready to do the next phase); i.e. threads are never doing different phases at the same time. Thus all the threads will finish the phase that calls the BREAKPOINT block before starting the phase that updates the voltage.
In thinking about this, I realize I have not made the use of pc.(source_var target_var setup_transfer) threadsafe. When I do that, it will be the proper way to implement gap junctions.
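For reference, a minimal sketch of that idiom, with illustrative names only (two sections s[0] and s[1], each carrying a point process gap[0]/gap[1] that declares a vgap POINTER); a complete worked example appears further down in this thread:
Code: Select all
// sketch of the pc.source_var / pc.target_var / pc.setup_transfer idiom;
// s[] and gap[] are assumed to already exist in the model
objref pc
pc = new ParallelContext()
s[0] pc.source_var(&v(.5), 0)     // publish s[0].v(.5) under source id 0
pc.target_var(&gap[1].vgap, 0)    // the gap on s[1] reads source id 0
s[1] pc.source_var(&v(.5), 1)     // publish s[1].v(.5) under source id 1
pc.target_var(&gap[0].vgap, 1)    // the gap on s[0] reads source id 1
{pc.setup_transfer()}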
Re: gap junctions, pointers and multithreading
I forgot to mention that I believe (though am not certain) that the global variable step method is threadsafe when the gap.mod is using a POINTER. This is because a time step is also split into several phases that do not overlap. The local variable time step method is not threadsafe and would not even work correctly with a single thread.
Re: gap junctions, pointers and multithreading
hines wrote: The local variable time step method is not threadsafe and would not even work correctly with a single thread.
Do you mean that when simulating a network with gap junctions, the adaptive time step should always be non-local, i.e. NOT cvode.use_local_dt(1)? Or is it a different message you bring across?
Ben
Re: gap junctions, pointers and multithreading
He means that local variable time step does not work with multithreaded execution. Use global variable time step instead.
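For concreteness, a minimal hoc sketch of what that looks like (my own illustration; loading nrngui.hoc provides the cvode object used below):
Code: Select all
// sketch: use the global variable time step method
{load_file("nrngui.hoc")}   // provides the standard run system and cvode
cvode.active(1)             // enable the variable step integrator
cvode.use_local_dt(0)       // global dt for the whole model, not per-cell
run()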
Re: gap junctions, pointers and multithreading
OK, but with a global timestep it doesn't work either, because the pointer required in the gap junction is not threadsafe. Any new ideas on parallelization of networks with gap junctions? Any way to get around this limitation of the current NRN parallelization? Is there a way to overrule NRN's warning that pointers are not threadsafe (because Michael suspects that with a global time step pointers should be safe)?
(And currently my network is running with local time steps because only very few neurons are spiking at any given time, so the time step for most neurons is relatively large and very small for the ones that are spiking. It doesn't seem to have any numerical consequences... or is it just flirting with erroneous results?)
Ben
Re: gap junctions, pointers and multithreading
The local variable time step method is threadsafe in its restricted domain (networks of spike-event-coupled cells). It cannot be used with multisplit or with gap junctions (even with a single thread).
If you are ONLY going to use the global variable time step method, then it is worth trying NEURON { THREADSAFE }, but be sure to compare with the serial version. You should get quantitative identity regardless of the number of threads if you test every once in a while with cvode.use_long_double(1), which eliminates the problem of different round-off error due to the non-associativity of computer addition. (You'll need to clone the mercurial repository so you are at or later than
http://www.neuron.yale.edu/hg/neuron/nr ... 73f7a0d688
)
But remember that multithread + fixed step + gap junction pointers are NOT threadsafe. The errors and thread differences would be small when using fadvance, but would be horribly huge with ParallelContext.psolve(), since in the former the pointer could vary by just the value difference of a single time step, while in the latter it could vary by the change over an entire minimum NetCon delay interval.
When I get a chance (probably after SFN) I will make
http://www.neuron.yale.edu/neuron/stati ... p_transfer
threadsafe for fixed step and gvardt (not lvardt).
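As a sketch of that comparison (my own illustration, not code from the post; it assumes the gap mod file already asserts THREADSAFE and the model is loaded under the standard run system):
Code: Select all
// compare a serial run against a threaded run with identical round-off
objref pc
pc = new ParallelContext()
cvode.active(1)            // global variable step method
cvode.use_long_double(1)   // same round-off regardless of summation order
pc.nthread(1)
run()
// ... record or print some reference voltages here ...
pc.nthread(4)
run()
// ... the threaded results should be quantitatively identical ...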
Re: gap junctions, pointers and multithreading
Apologies for resurrecting this rather old thread, but it was only the third one in the forum... so yeah.
I was wondering what the status of this question is, given the recent release of the 7.0 version into the wild.
My model too uses quite a number of graded synapses that require stepwise updates to function properly. Right now I am using the POINTER statement in their .mod files, but as the original poster said, nrnivmodl complains that it's not threadsafe.
Thus, to my question: is the POINTER statement still safe to use if I restrict myself to the global variable timestep? (Incidentally, does this equate to using CVode on a single computer?)
If not, have the parallel transfer functions been updated to be threadsafe?
For the latter, how much setup does pc need for the parallel transfer to work? Do I need to go through the whole runworker() business as per the usual parallel simulations, or will just doing the following work? Thanks.
Code: Select all
objref pc
pc = new ParallelContext()
pc.source_var(&cell1.v(0.5), 1)
pc.target_var(&GABAaSyn.PreSynVal, 1)
Re: gap junctions, pointers and multithreading
sl wrote: I was wondering what the status of this question is, given the recent release of the 7.0 version into the wild.
Nothing has been done. If your model has on the order of 10k states or more, then you can anticipate that using threads would give good performance improvements, and I would certainly implement the extension sooner instead of later.
sl wrote: how much setup does pc need for the parallel transfer to work?
You have the idea. Except for the graph, here is a test that uses it.
gap.hoc
Code: Select all
create soma[2]
access soma[0]
forall { diam = 10 L = 100/(PI*diam) insert hh }
objref stim
stim = new IClamp(.5)
stim.amp = 0.5
stim.dur = 0.1
soma[0] { gnabar_hh = 0 gkbar_hh = 0 el_hh = -65 gl_hh = .001 }
{load_file("gap.ses")} // just a graph that plots soma[0and1].v(.5)
objref gap[2]
for i=0, 1 soma[i] {
gap[i] = new HalfGap(.5)
gap[i].r = 100 // MOhm
}
objref pc
pc = new ParallelContext()
for i=0, 1 soma[i] {
j = (i+1)%2
pc.source_var(&v(.5), i)
soma[j] pc.target_var(&gap[j].vgap, i)
}
{pc.setup_transfer()}
run() // runs properly with single thread
{load_file("parcom.hoc")}
ParallelComputeTool[0].nthread(2)
run() // generates error message
The HalfGap mechanism (mod file):
Code: Select all
NEURON {
POINT_PROCESS HalfGap
ELECTRODE_CURRENT i
RANGE r, i, vgap
}
PARAMETER {
r = 1e10 (megohm)
}
ASSIGNED {
v (millivolt)
vgap (millivolt)
i (nanoamp)
}
BREAKPOINT {
i = (vgap - v)/r
}
Running with two threads produces the error message:
parallel transfer not fully implemented with threads
Re: gap junctions, pointers and multithreading
Haha, mine only has 264... guess I need to work harder then.
Thank you very much for the clarification and code sample.
Re: gap junctions, pointers and multithreading
sl wrote: Haha, mine only has 264... guess I need to work harder then.
You might gain some speedup by specifying cvode.cache_efficient(1) -- see http://www.neuron.yale.edu/neuron/stati ... _efficient
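For example (a minimal sketch, assuming the standard run system is loaded), the call is made once before running:
Code: Select all
cvode.cache_efficient(1)   // request cache-efficient internal data layout
run()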
Re: gap junctions, pointers and multithreading
I implemented the gap junctions with threads for the fixed and global variable step methods, and so they can coexist with multisplit. For the variable step method, one should not place the gap junction POINT_PROCESS at 0 or 1 of its section. If you expect to use multiple threads, you should use the otherwise optional first arg to pc.target_var to allow the system to figure out which thread owns the target variable. I.e. instead of
Code: Select all
soma[j] pc.target_var(&gap[j].vgap, i)
which I wrote in the code above, use
Code: Select all
pc.target_var(gap[j], &gap[j].vgap, i)
(using soma[j] to specify the currently accessed section was never needed).
I did some timing measurements for a 100 and 1000 cell model where a cell is an hh soma and a 4-compartment passive dendrite, in a chain where dendrite(.99) and soma[i+1](.5) have a gap junction between them. Results are
Code: Select all
              100 cells          1000 cells
nthread     nogap    gap       nogap    gap
1           0.09     0.1       0.85     1.1
2           0.04     0.09      0.43     0.6
4           0.03     0.08      0.27     0.45
Gap junctions have more thread overhead not only because a value has to make its way from one cache to another, but also because each time step has to be divided into two distinct thread jobs (whereas without gap junctions a thread job lasts for a minimum NetCon delay, which in this case is the entire simulation).
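Applied to the gap.hoc example earlier in this thread, the connection loop would then read (same code, just swapping in the new target_var form):
Code: Select all
for i=0, 1 soma[i] {
	j = (i+1)%2
	pc.source_var(&v(.5), i)
	pc.target_var(gap[j], &gap[j].vgap, i)  // first arg lets NEURON find the owning thread
}
{pc.setup_transfer()}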
Re: gap junctions, pointers and multithreading
Ah, thank you very much for that. I shall test this in my model soon.
Re: gap junctions, pointers and multithreading
Apologies for the delay in testing this. I compiled the alpha source code (NEURON -- VERSION 7.1 (298+:4d04b5dab7ff+) 2009-03-23) and tried to run my model with it. NEURON crashed with this output:
I was not using multisplit, just 2 threads.
Code: Select all
nrniv: ../nrncvode/occvode.cpp:656: void Cvode::fun_thread_transfer_part2(double*, NrnThread*): Assertion `nrn_nthread == 1' failed.
nrniv: ../nrncvode/occvode.cpp:656: void Cvode::fun_thread_transfer_part2(double*, NrnThread*): Assertion `nrn_nthread == 1' failed.
Aborted
Re: gap junctions, pointers and multithreading
Thanks for implementing multithreaded gap junctions. It is (or: should be) a step forward to truly parallel gap junctions. However, I encounter different problems when testing my own dummy model with the latest neuron distribution, VERSION 7.1 (312:90ceb36c0079). With nthread(1) the code works fine; when setting nthread(2), NEURON gives me the error output shown after the code below.
I'll attach all the code.
celltemplate.hoc
Code: Select all
begintemplate tcell
public soma,asyn
create soma
objref asyn
proc init() {
create soma
soma {
L = 25 diam = 20
nseg = 1
insert pas
g_pas = 2e-5
insert hh
asyn = new Exp2Syn(.5)
}
}
endtemplate tcell
modgap.hoc
Code: Select all
load_file("nrngui.hoc")
load_file("celltemplate.hoc")
cvode.active(1)
tstop = 1000
objref g, b, nil
objref cell1, cell2, s, nc1
objref gaps, gapt // gap source and gap target
s = new NetStim(0.5)
s.interval=50 //ms (mean) time between spikes
s.number=100 //(average) number of spikes
s.start=5 //ms (most likely) start time of first spike
s.noise=0
cell1 = new tcell()
cell2 = new tcell()
nc1= new NetCon(s, cell1.asyn, 0, 0, 0.01)
cell1.soma gaps = new gap(.5)
cell2.soma gapt = new gap(.5)
gaps.r = 3e3
gapt.r = 3e3
objref pc
pc = new ParallelContext()
cell1.soma {
pc.source_var(&v(.5), 0)
pc.target_var(gaps, &gaps.vgap, 0)
}
cell1.soma {
pc.source_var(&v(.5), 1)
pc.target_var(gapt, &gapt.vgap, 1)
}
{pc.setup_transfer()}
{load_file("parcom.hoc")}
ParallelComputeTool[0].nthread(1)
run() // generates error message
.oc>/Applications/NEURON-dev2/nrn/i686/bin/nrniv.app/Contents/MacOS/nrniv: usable mindelay is 0 (or less than dt for fixed step method)
near line 1
{run()}
^
finitialize(-65)
init()
stdinit()
run()
When starting the code with nthread(1) and then manipulating the number of threads through the GUI, I get a similar error as the previous posting:
Assertion failed: (nrn_nthread == 1), function fun_thread_transfer_part2, file ../nrncvode/occvode.cpp, line 656.
./i686/special: line 13: 95526 Abort trap "${NRNIV}" -dll "/work/projects/NEURON/Ach_multi_project/gaptest/i686/.libs/libnrnmech.so" "$@"
I'm not sure what's wrong. The code as provided by Michael seemed to work, even with nthread(2).