Understanding how the parallel computing tool works

General issues of interest both for network and
individual cell parallelization.

Moderator: hines

Post Reply
Keivan
Posts: 127
Joined: Sat Apr 22, 2006 4:28 am

Understanding how the parallel computing tool works

Post by Keivan » Sun Nov 14, 2010 11:24 pm

I'm trying to understand how the parallel computing tool works. I'm reading paracom.hoc and loadbal.hoc, and I need your help to understand the code.

1. What is a backbone segment?
2. What is “global variable dt” and how do I activate it? cvode.active()?
Is this true? Or is variable dt global when we use cvode.use_local_dt(0)?
- When I tried to use the variable dt (Variable Step Control) tool with the parallel computing tool, there was a crash. Is this normal?
3. Do nonuniformities of the membrane mechanisms impact the operation of the parallel computing tool? (e.g. the M current, a kind of voltage-activated potassium channel, is located mostly in the perisomatic region of CA1 pyramidal cells.)
4. Do I obtain the latest development code of the hoc files located in “*\nrnxx\lib\hoc” with the "hg clone http://www.neuron.yale.edu/hg/neuron/nrn" command?
6. What is the meaning of "sid" in "pc.multisplit(x, sid)"? Does it mean a thread, or does it refer to a section or node that is split in two?
7. If I have a cell with a soma and two dendrites connected to it (one apical and one basal), what is the simplest possible hoc code that lets me split this cell into two pieces, each processed by a separate thread? What modification does the following code need to work?

Code: Select all

objref pc
pc = new ParallelContext()
pc.nthread(2)
soma {
	pc.multisplit(0,1,2)
	pc.multisplit(1,2,2)
}
pc.multisplit()
8. In loadbal.hoc the srlist object is defined twice:
line 178 -> srlist = new SectionList()
line 188 -> srlist = new List()
9. NEURON can recognize logical CPU cores but can't recognize hyperthread CPU cores.
10. The sortindex.reverse Vector class function is not described in the reference manual. Its operation is obvious, so no question there. (Why does this obfunc not require parentheses?)

ted
Site Admin
Posts: 5727
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

Re: Understanding how the parallel computing tool works

Post by ted » Mon Nov 15, 2010 11:48 am

Here are answers to most of your questions.
Keivan wrote:2. What is “global variable dt”
"variable dt" (adaptive integration) means that NEURON's integrator automatically adjusts the time step and order of integration so that local error (error at each advance of the solution) is less than some user-specified amount. "Global variable dt" means that the same time step is used by each cell in a model. "Local variable dt" means that each cell has its own time step. You may find this article interesting:
Lytton, W. and Hines, M.
Independent variable timestep integration of individual neurons for network simulations.
Neural Computation 17:903-921, 2005.
how to activate it? cvode.active()?
cvode.active() is used to specify whether the integrator uses fixed time step or adaptive integration. cvode.use_local_dt() is used to specify whether adaptive integration uses global dt (all cells use the same dt) or local dt (each cell has its own dt). Details are in the Programmer's Reference.
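The distinction can be sketched in a few lines of hoc (a minimal sketch; the standard run system also creates a CVode instance named cvode, but here one is made explicitly):

Code: Select all

```hoc
objref cv
cv = new CVode()
cv.active(1)          // 1 = adaptive (variable dt) integration, 0 = fixed dt
cv.use_local_dt(0)    // 0 = global variable dt: all cells advance with the same dt
// cv.use_local_dt(1) // 1 = local variable dt: each cell gets its own dt
```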
when I tried to use the variable dt (Variable Step Control) tool with the parallel computing tool, there was a crash. Is this normal?
No.
Do nonuniformities of the membrane mechanisms impact the operation of the parallel computing tool? (e.g. the M current, a kind of voltage-activated potassium channel, is located mostly in the perisomatic region of CA1 pyramidal cells.)
No.
Do I obtain the latest development code of hoc files located in “*\nrnxx\lib\hoc” with the "hg clone http://www.neuron.yale.edu/hg/neuron/nrn" command?
That will clone NEURON's mercurial repository, which will allow you to recreate any version of source code for NEURON since the repository was first set up. If you just want to compile the most recent development code, follow the instructions at http://www.neuron.yale.edu/neuron/download/getdevel
What is the meaning of "sid" in "pc.multisplit(x, sid)"? Does it mean a thread, or does it refer to a section or node that is split in two?
sid is an abbreviation for "split id". Splitting a cell means cutting it into pieces at one or more nodes, which are called "split nodes". The pieces are then simulated as subsystems that are coupled only at the split nodes. See the documentation of multisplit in the ParallelContext class in the Programmer's Reference, and also
Hines, M.L., Markram, H. and Schuermann, F.
Fully implicit parallel simulation of single neurons.
Journal of Computational Neuroscience 25:439-448, 2008.
8. In loadbal.hoc the srlist object is defined twice:
line 178 -> srlist = new SectionList()
line 188 -> srlist = new List()
What is the question?
9. NEURON can recognize logical CPU cores but can't recognize hyperthread CPU cores.
Your use of the term "logical CPU cores" is a misnomer. Those are actual physical processors. There is no such physical entity as a "hyperthread CPU core". Hyperthreading is supposed to help speed up "context switching" which may be important for people who use multiple GUI-intensive applications at the same time. However, it has no real benefit for number crunching applications like NEURON. Its greatest use may be for chip makers who concoct tests that allow them to claim that their CPUs are faster than their competitors' CPUs.
10. sortindex.reverse Vector class function is not described in the reference manual.
Because it isn't a "Vector class function". It is normal hoc syntax that specifies the sequential execution of two Vector class methods, sortindex and reverse, which are documented. For other examples of this kind of syntax, see the documentation of the Vector class in the Programmer's Reference.
(why this obfunc does not require a parenthesis?)
Optional arguments may be omitted; if all arguments are omitted, the parentheses may also be omitted.
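A small example of the chaining ted describes (values chosen arbitrarily, using the documented Vector methods append, sortindex, and reverse):

Code: Select all

```hoc
objref v, idx
v = new Vector()
v.append(3, 1, 2)
// sortindex returns a new Vector of indices that would sort v: 1 2 0
// reverse then reverses that index Vector in place: 0 2 1
idx = v.sortindex.reverse
// identical to the fully parenthesized form:
// idx = v.sortindex().reverse()
```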

hines
Site Admin
Posts: 1595
Joined: Wed May 18, 2005 3:32 pm

Re: Understanding how the parallel computing tool works

Post by hines » Mon Nov 15, 2010 12:39 pm

1) See fig 1 and 2 of http://www.neuron.yale.edu/neuron/stati ... isplit.pdf
The major constraint in splitting cells is that no subtree has more than two connection points
to other subtrees. If a subtree has two connection points, then the path between them on that
subtree is the backbone.

2) ...crash. is this normal?
There is a bug somewhere. If you send me the code that crashes I'll take a look at it. < michael dot hines at yale dot edu >

3) Nonuniformities (in the sense of existence vs. non-existence of mechanisms) affect the decisions on splitting, since the goal is
to have the total number of states on each thread be as similar as possible.

7) The equivalent of
connect apical(0), soma(0)
connect basal(0), soma(1)
is (the soma is a backbone since it has two connection points to other subtrees;
I've numbered the connection points 15 and 24, though 0 and 1 would do as well)
soma pc.multisplit(0,15)
soma pc.multisplit(1,24)
apical pc.multisplit(0,15)
basal pc.multisplit(0,24)
pc.multisplit()

8) In cpu_complexity, it was convenient for the srlist (section root list) to be a SectionList, and it is only used in the body
of that function.
In cplx_helper, it was convenient for srlist to be a list of SectionRef objects.
It would have been better coding style for the cpu_complexity srlist to have a different name and to be declared locally as an objref.

Keivan
Posts: 127
Joined: Sat Apr 22, 2006 4:28 am

Re: Understanding how the parallel computing tool works

Post by Keivan » Wed Nov 17, 2010 2:43 am

Thank you, hines. Thank you, ted.
1.
Do I obtain the latest development code of hoc files located in “*\nrnxx\lib\hoc” with the "hg clone http://www.neuron.yale.edu/hg/neuron/nrn" command?
I've asked this question because I've been tracking NEURON changes for a while using this feed: http://www.neuron.yale.edu/hg/neuron/nrn/rss-log.
I've noticed there was no change to the code located in \nrnxx\lib\hoc in this period. I suspected that maybe I'm following the wrong feed. If this is the right feed, does it mean that there has been no change to the code in \nrnxx\lib\hoc for at least a year?

2. Probably I should not ask this question here. If it is nonsense, don't answer it.
What is triangularization? Could you please refer me to a book or article where I can learn more about this? [Please consider that I'm a medical doctor.]

3.
the soma is a backbone since it has two connection points to other subtrees
In this example the soma is the most important section. Does selecting an important section as a backbone section impact the accuracy of calculations in that section?

4. I've tested your proposed code but it's not working yet.

Code: Select all

create soma,apical,basal
connect soma(1), apical(0)
connect soma(0), basal(0)
forall {insert hh 	insert pas	nseg = 7}
soma.nseg = 1
objref pc
pc = new ParallelContext()
pc.nthread(2)
soma pc.multisplit(0,15)
soma pc.multisplit(1,23)
apical pc.multisplit(0,23)
basal pc.multisplit(0,15)
pc.multisplit()

Code: Select all

nrniv: two sid = 15 at same point on tree rooted at basal
5. About the crash: it happens from time to time when I activate parallel computing, but it is actually related to the Multiple Run Fitter tool. This is the error output:

Code: Select all

nrniv: Pointer points to freed address: dendA5_011111111111111.v(0.5)
 near line 43
 {prun()}
         ^
FitnessGenerator[0].efun(        )
ParmFitness[0].efun(3...      , )
MulfitPraxWrap[0].praxis_efun(3...    , )
MulfitPraxWrap[0].prun(  )
and others
initcode failed with 2 left
It's an occasional error. It wouldn't matter as long as I can trust the results when the code runs and there is no crash.

6. What is the meaning of this command:

Code: Select all

execute1("{all}", $o1, 0)
Actually, I don't understand the meaning of "{all}" here.
I also don't understand the meaning of this:
Parse and execute the command in the context of the object.

Keivan
Posts: 127
Joined: Sat Apr 22, 2006 4:28 am

Re: Understanding how the parallel computing tool works

Post by Keivan » Thu Nov 18, 2010 4:51 am

I want to confirm that the bug is fixed and there is no crash.
I know I'm asking a lot more than usual. Understanding this code is like an advanced hoc programming course for me. May I ask you (beg you) to answer my future questions in this topic (as much as you can, as much as you have time for)?

Keivan
Posts: 127
Joined: Sat Apr 22, 2006 4:28 am

Re: Understanding how the parallel computing tool works

Post by Keivan » Mon Nov 22, 2010 5:50 am

Dear Hines,
Today I realized that your fix has a very negative impact on the Multiple Run Fitter's operation. Activating the parallel computing tool (multisplit + busy waiting + variable dt) causes the Multiple Run Fitter to fail to find the results. If you want, I can send you my new model that shows this (please let me know).

hines
Site Admin
Posts: 1595
Joined: Wed May 18, 2005 3:32 pm

Re: Understanding how the parallel computing tool works

Post by hines » Mon Nov 22, 2010 3:14 pm

In regard to Keivan » Wed Nov 17, 2010 3:43 am
1). Yes. But I'm not familiar with the RSS feed; I didn't know it existed. Perhaps someone can write a few lines
about what it is supposed to do and how to use it.
2). One of the major computations involved in integrating the equations is solving the matrix equation G*V = I,
where G is a sparse matrix very similar to a tridiagonal matrix, and V and I are vectors. NEURON solves this
via direct Gaussian elimination, which happens in two phases: triangularization followed by back substitution. Any book
on numerical methods will present an explanation of what is going on.
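For the curious, here is a minimal sketch of those two phases for a plain tridiagonal system (the Thomas algorithm; this is an illustration only, not NEURON's actual solver). The Vectors hold the sub-, main, and super-diagonals and the right-hand side; the solution overwrites the right-hand side.

Code: Select all

```hoc
// Solve a tridiagonal system: a.x[i]*v[i-1] + b.x[i]*v[i] + c.x[i]*v[i+1] = d.x[i]
// $o1 = subdiagonal a, $o2 = diagonal b, $o3 = superdiagonal c, $o4 = rhs d
proc trisolve() { local i, n, m
  n = $o2.size()
  // phase 1: triangularization (eliminate the subdiagonal)
  for i = 1, n-1 {
    m = $o1.x[i] / $o2.x[i-1]
    $o2.x[i] = $o2.x[i] - m * $o3.x[i-1]
    $o4.x[i] = $o4.x[i] - m * $o4.x[i-1]
  }
  // phase 2: back substitution (solution overwrites $o4)
  $o4.x[n-1] = $o4.x[n-1] / $o2.x[n-1]
  for (i = n-2; i >= 0; i = i - 1) {
    $o4.x[i] = ($o4.x[i] - $o3.x[i] * $o4.x[i+1]) / $o2.x[i]
  }
}
```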
3). One does not select sections on the basis of a desired backbone. Backbones are accidents of splitting cells into pieces.
The main criterion for splitting is to get enough pieces to allow a reasonably even load balance across processors.
Splitting does not affect accuracy, but it does affect computational efficiency.
4). My intention was to draw an analogy between "connect" and "pc.multisplit". The former is limited to a single processor,
and the latter generalizes to multiple threads and cluster computing. For threads, one builds the entire cell using "connect"
and then disconnects certain points, which are then connected again by pc.multisplit. The reason for the error message due to
soma pc.multisplit(0,15) and basal pc.multisplit(0,15) is that soma(0) and basal(0) are the same point, because of an earlier
connect soma(0), basal(0). In the example, there is no need to execute any of the "connect" statements.
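Following that advice, a sketch of the example without any connect statements (the sids 15 and 23 are arbitrary labels; each must appear at exactly two points, which pc.multisplit then couples):

Code: Select all

```hoc
create soma, apical, basal
forall { insert hh  insert pas  nseg = 7 }
soma.nseg = 1
objref pc
pc = new ParallelContext()
pc.nthread(2)
soma pc.multisplit(0, 15)    // soma(0) will be coupled to basal(0)
soma pc.multisplit(1, 23)    // soma(1) will be coupled to apical(0)
apical pc.multisplit(0, 23)
basal pc.multisplit(0, 15)
pc.multisplit()
```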
5). As I mentioned by direct email, thanks for pointing out the bug; it is fixed at
http://www.neuron.yale.edu/hg/neuron/nr ... 6e4dd9ff95
6). The statement does not seem to be doing anything substantive. However, a similar statement occurs in lib/hoc/loadbal.hoc

Code: Select all

if (!execute1("{all}", $o1, 0)) {
which executes the statement {all()} with error messages turned off, and returns 1 if the statement executes successfully and 0
if it fails. It basically tells me whether a variable called "all" exists in the context of the $o1 arg.
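A small illustration of that probing idiom (CellStub is a hypothetical template standing in for the $o1 arg; the return values in the comments follow the description above):

Code: Select all

```hoc
begintemplate CellStub
public all
objref all
proc init() { all = new SectionList() }
endtemplate CellStub

objref ob
ob = new CellStub()
print execute1("{all}", ob, 0)    // 1: a name "all" exists in ob's context
print execute1("{bogus}", ob, 0)  // 0: no such name; messages suppressed by the 0
```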

In regard to Keivan » Thu Nov 18, 2010 5:51 am
I thought the bug was fixed by
http://www.neuron.yale.edu/hg/neuron/nr ... 6e4dd9ff95
but you say in Keivan » Mon Nov 22, 2010 6:50 am
...your fix has a very negative impact on the multirun fitter operation
Let's try again by email. Send your model to me and I'll take a look. The combination of the Multiple Run Fitter, threads, busy waiting, and the variable step method
has not had extensive use, so there may well be a problem. I'd first turn off busy waiting, as I've never encountered a situation where it has
better performance, and it cannot possibly work unless the number of threads is matched to the number of processors.

Post Reply