One day later I can tell that the problem is caused by the NetCon.
nc1 = new NetCon(s, cell1.asyn, 0, 0, 0.01) works fine with one thread but doesn't work with n threads.
nc1 = new NetCon(s, cell1.asyn, 0, 0.1, 0.01) works with n threads.
But what if I want delay = 0?
gap junctions, pointers and multithreading
Moderator: hines
Re: gap junctions, pointers and multithreading
btorb wrote: One day later I can tell that the problem is caused by the NetCon.
. . .
But what if I want delay = 0?
The problem is not caused by NetCon. It is a consequence of the fact that zero delay is possible only within a thread, not between threads. If you want zero delay, keep source and target on the same thread.
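A minimal sketch of that idea, assuming a hypothetical Cell template with a soma and an asyn synapse, and assuming these two cells are the only cells in the model: assign both root sections to the same thread with pc.partition, after which a zero-delay NetCon between them is legal.
Code:
// sketch only: Cell, asyn, and the two-cell setup are assumptions, not code from this thread
objref pc, cell1, cell2, same, nc1
pc = new ParallelContext()
pc.nthread(2)                  // the rest of the model can still use several threads
cell1 = new Cell()
cell2 = new Cell()
same = new SectionList()
cell1.soma same.append()
cell2.soma same.append()
pc.partition()                 // clear any previous thread assignment
pc.partition(0, same)          // both root sections are handled by thread 0
// zero delay is allowed here because source and target live on the same thread
cell2.soma nc1 = new NetCon(&v(0.5), cell1.asyn, 0, 0, 0.01)
The only essential part is the thread assignment; how the cells themselves are built is incidental.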
Re: gap junctions, pointers and multithreading
Unfortunately, more difficulties with the gap junctions in combination with ParallelNetwork. I'm building a network that runs on multiple nodes using ParallelContext and ParallelNetwork. Based on the code from the previous post, I implemented a gap junction. It works for a single gap junction between 2 cells on 2 different nodes (mpiexec -np 2 special -mpi mycode.hoc). Now I have built it into a larger network (still only 3x3 interneurons for testing purposes), but NEURON seems to hang after the { pc.setup_transfer() }. The processors are 100% busy, but nothing obvious is happening. (It ran for 30 minutes on 4 cores before I killed the process; normally, the network of 25+9 neurons takes a split second to finish.)
According to the documentation one should call pc.setup_transfer after every call to either pc.source_var or pc.target_var. However, when following this advice NEURON seems to hang. When I only call pc.setup_transfer() once after all variables are set up, the gap junctions don't work...
UPDATE: I guess my problem boils down to two issues:
1) Is it possible (syntactically: yes) to execute the following code:
Code:
proc gapSecondPass() { local i, srcgid, tgid, ggid localobj tcell, gapt
    for i = 0, allGapTrg.size()-1 {
        srcgid = allGapSrc.x[i]
        tgid = allGapTrg.x[i]
        ggid = allGapGid.x[i]
        if (pc.gid_exists(tgid)) {
            tcell = pc.gid2cell(tgid)
            // make new gap (aka: electrical coupling in reverse order)
            ggid = allGapGid.x[i] + 1 // gap GID ending with 1 is the way back
            // connect da gap (in direction SRC -> TRG)
            tcell.soma gapt = new HalfGap(0.5)
            gapt.r = gapR
            pc.target_var(gapt, &gapt.vgap, allGapGid.x[i]) // connect to original SRC gap
            tcell.soma pc.source_var(&v(0.5), ggid)
            { pc.setup_transfer() }
        }
    }
    for rank = 0, pc.nhost-1 {
        { pc.setup_transfer() }
    }
}
This code is interpreted fine (i.e., no errors), but the result is not as desired because I cannot see any influence of the gap junction. This code is part of a larger program; in a first pass, I determine which neurons are connected to which other neurons, and the GIDs are stored in 3 vectors. Now my real question: is it possible to use a localobj, or do I need to store all the gap junctions in an array?
2) NEURON seems to hang when pc.setup_transfer is called often. A side effect of the first issue?
Help?
Ben
Re: gap junctions, pointers and multithreading
According to the documentation one should call pc.setup_transfer after every call to either pc.source_var or pc.target_var.
It is wasteful to call ParallelContext.setup_transfer more than once. From
http://www.neuron.yale.edu/neuron/stati ... p_transfer
This method must be called after all the calls to source_var and target_var and before initializing the simulation.
But it should not matter if it is called more than once.
The ParallelTransfer methods of ParallelContext were updated to work with threads at the end of January 2009. If your version is later than that, send me a zip file with all the hoc, ses, and mod files necessary to exhibit the problem and I'll look into it.
(michael.hines@yale.edu)
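As a rough illustration of that ordering (a sketch that assumes the same pc, allGapTrg, allGapGid, gapR, and HalfGap names as in the fragment above, and reuses its target_var/source_var calls), all of the pc.source_var and pc.target_var calls happen inside the loop, and pc.setup_transfer is called exactly once, after the loop, on every rank:
Code:
proc gapSecondPass() { local i, tgid, ggid localobj tcell, gapt
    for i = 0, allGapTrg.size()-1 {
        tgid = allGapTrg.x[i]
        if (pc.gid_exists(tgid)) {
            tcell = pc.gid2cell(tgid)
            ggid = allGapGid.x[i] + 1                       // gap GID for the return direction
            tcell.soma gapt = new HalfGap(0.5)
            gapt.r = gapR
            pc.target_var(gapt, &gapt.vgap, allGapGid.x[i]) // connect back to the original SRC gap
            tcell.soma pc.source_var(&v(0.5), ggid)
            // gapt must also be kept in some permanent container -- see the reply below
        }
    }
    pc.setup_transfer() // exactly once, after all source_var/target_var calls, executed by every rank
}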
Re: gap junctions, pointers and multithreading
I looked at your code fragment for gapSecondPass a bit more closely and cannot see where you are storing the
... gapt = new HalfGap(0.5)
objects. None of them will exist when gapSecondPass returns.
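One way to keep them alive, sketched here with a hypothetical top-level List called gapList: append each HalfGap to the List as it is created, so that a reference survives after the proc returns.
Code:
objref gapList
gapList = new List() // created once at the top level, before gapSecondPass runs

// inside the loop in gapSecondPass, right after the HalfGap is made:
tcell.soma gapt = new HalfGap(0.5)
gapt.r = gapR
gapList.append(gapt) // the List keeps a reference, so the HalfGap persists
An objref array sized to the number of gap junctions would work just as well; the only requirement is that something other than a localobj still references each HalfGap after the proc returns.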