Saving vector at each time step - not associated with cell

General issues of interest both for network and
individual cell parallelization.

Moderator: hines

sgratiy
Posts: 41
Joined: Tue Mar 30, 2010 5:33 pm

Saving vector at each time step - not associated with cell

Post by sgratiy »

My model runs in parallel using Python as the interpreter.
I need to save a vector of CSDs at every time step; it results from summing currents over the entire population of cells, so this vector is not a range variable and is not associated with any particular cell. For the serial implementation I modified the following procedure to update my vector of CSDs:

Code: Select all

proc advance() { localobj tobj
  fadvance()
  update_csd()
  tobj = csdvec.c() // prevent future changes to csdvec from affecting contents of Vectors that are already in dlist
  dlist.append(tobj)
}
but in a parallel simulation, as far as I understand, this will not work because the simulation control does not have a proc advance(). It seems that using

Code: Select all

vdest.record(&var)
also is not going to work because the description of the method states:
For the local variable timestep method, CVode.use_local_dt() and/or multiple threads, ParallelContext.nthread(), it is often helpful to provide specific information about which cell the var pointer is associated with by inserting as the first arg some POINT_PROCESS object which is located on the cell. This is necessary if the pointer is not a RANGE variable and is much more efficient if it is. The fixed step and global variable time step methods do not need or use this information for the local step method but will use it for multiple threads. It is therefore a good idea to supply it if possible.
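For concreteness, the optional first arg looks like this from Python (a hedged sketch, not code from my model; the IClamp is just a stand-in POINT_PROCESS):

Code: Select all

from neuron import h
sec = h.Section(name='sec')
pp = h.IClamp(sec(0.5))          # any POINT_PROCESS located on the cell
vec = h.Vector()
vec.record(pp, sec(0.5)._ref_v)  # first arg hints which cell the var belongs to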
How can I save my vector, which is not associated with any particular cell? Do I really need to define a point process to save the data?
ted
Site Admin
Posts: 6289
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

Re: Saving vector at each time step - not associated with ce

Post by ted »

First let me make sure I know what you want to do.
sgratiy wrote:I need to save a vector of CSDs at every time step which results from summing currents for the entire population of cells.
Presumably you are calculating current source density by a procedure roughly equivalent to this:
define a set of J volumes vol_j
within the jth volume there are K_j neurites
the kth neurite in volume j generates membrane current i_jk
the current source density in volume j is

Code: Select all

CSD_j = (SUM over k of i_jk) / vol_j
In the course of a simulation, each i_jk varies with time so you need to calculate CSD_j after each fadvance(), and so you will want to capture the time course of CSD_j to a vector.
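A minimal Python sketch of this per-volume sum (seclists and vol are assumptions about the bookkeeping, not code from this thread; i_membrane requires the extracellular mechanism, and absolute units would need a scale factor):

Code: Select all

from neuron import h

def csd(j, seclists, vol):
    # seclists[j]: sections whose segments lie in volume j (hypothetical)
    total = 0.0
    for sec in seclists[j]:
        for seg in sec:
            # i_membrane (mA/cm2) times segment area (um2)
            total += seg.i_membrane * h.area(seg.x, sec=sec)
    return total / vol[j]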

To handle all compartments, you will need J CSD vectors--one for each volume--if your simulation is executed serially. However, if your model is parallelized with MPI, the cells will be distributed over nhost processors. This means that each host will have to capture its own set of J CSD vectors in the course of a simulation. The total number of CSD vectors for the entire model will be J*nhost. After the simulation ends, you will have to "reduce" these to a single set of J CSD vectors (the question of how to do this will be addressed in a subsequent post).

Have I guessed correctly?
For the serial implementation I modified the following procedure to update my vector of CSDs
I'd do something similar except I'd get rid of localobj tobj and do

Code: Select all

proc advance() {
  fadvance()
  update_csd()
  dlist.append(csdvec.c())
}
or maybe write update_csd as an obfunc that returns the CSD vector and change its name to new_csd(), so proc advance becomes

Code: Select all

proc advance() {
  fadvance()
  dlist.append(new_csd())
}
but in a parallel simulation, as far as I understand, this will not work because the simulation control does not have a proc advance().
No, proc advance() still exists and can be used to insert calculations that are to be performed after the model's states have been updated. However, on each host you can only add up the currents generated by the neurites that exist on that host. This means you must use gids to identify the cells, and the procedure that calculates the CSD must use
if (pc.gid_exists(gid))
statements to make sure that each host adds up only the currents produced by neurites that are actually on that host.
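A hedged Python sketch of that per-host guard (all_gids and add_cell_currents are hypothetical names, not from this thread):

Code: Select all

from neuron import h
pc = h.ParallelContext()
for gid in all_gids:              # all_gids: gids for the whole network
    if pc.gid_exists(gid):        # true only on the host that owns this gid
        cell = pc.gid2cell(gid)   # the cell object registered for that gid
        add_cell_currents(cell)   # hypothetical helper summing i_membrane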
For the local variable timestep method . . .
Why bring this up? Are you using local variable time step integration, or fixed time steps (which are inherently global)? If you're using fixed time steps, the caveats about local variable timestep are not relevant. If you are using local variable time step, my first suggestion is stop--use fixed time steps. Local variable time steps will make data reduction inordinately complex, and for what possible benefit?

Two further comments about Vector.record before I give you the chance to reply--
Vector.record can capture the time course of any scalar variable, not just a range variable or some other variable that "belongs" to a section. There is only one small catch: if you try to capture the time course of t itself, e.g. with
objref tvec
tvec = new Vector()
tvec.record(&t)
at least one section must exist before the tvec.record(&t) statement is executed; if not, hoc will produce a "Section access unspecified" error message when it encounters the tvec.record(&t) statement. To make this error message go away, it is sufficient to create a "dummy" section like so
create foo
and then you can execute tvec.record(&t) with impunity. And it will work correctly, too.
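In Python the same workaround is (a minimal sketch; the section name is arbitrary):

Code: Select all

from neuron import h
dummy = h.Section(name='dummy')  # at least one section must exist
tvec = h.Vector()
tvec.record(h._ref_t)            # records t at every time step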
sgratiy
Posts: 41
Joined: Tue Mar 30, 2010 5:33 pm

Re: Saving vector at each time step - not associated with ce

Post by sgratiy »

Ted, your understanding of my plan for calculating the CSD is precisely correct. Although I assume some symmetry in the cell distribution, so my CSD will vary only along the depth of the cortex.

I am glad to know that proc advance() does exist and can be used. I am looking forward to learning how to do that in parallel NEURON. I guess I need to understand the simulation control for a parallel run better. The code I am modifying uses h.prun() to run the simulation, but I could not find a description of this function anywhere.
Vector.record can capture the time course of any scalar variable, not just a range variable or some other variable that "belongs" to a section.
Unfortunately I do not know how to use it correctly. For instance, the following member function gives an error (I am running the code serially at this point):

Code: Select all

    def calc_csd(self):
        
        for sec in self.cell[0].dendritic:
            for jseg,seg in enumerate(sec):    
                area_seg= h.area(seg.x) # area of the segment
                self.i_mem_seg[jseg] = sec(seg.x).i_membrane*area_seg # get membrane current for the segment
               
        csd = self.W*self.i_mem_seg # compute vector which will be proportional to the CSDs

        self.csdreclist=h.List()
        
        for jzpos in range(csd.shape[0]):
            csdrec = h.Vector()
            csdrec.record(_ref_csd[jzpos])
            self.csdreclist.append(csdrec)
Error message:

Code: Select all

  File "population.py", line 77, in calc_csd
    csdrec.record(_ref_csd[jzpos])
NameError: global name '_ref_csd' is not defined
If instead I pass the element of the csd vector directly, as follows,

Code: Select all

csdrec.record(csd[jzpos])
then I get:

Code: Select all

nrniv: Optional first arg is not a POINT_PROCESS
 near line 0
 {apic[54] { pt3dadd(9.7, -185.4, 0.55, 0.07) }}
                                                ^
        Vector[0].record(...)
oc_restore_code tobj_count=1 should be 0
Traceback (most recent call last):
  File "test.py", line 81, in <module>
    l2pc.calc_csd()  # calculate CSD
  File "population.py", line 77, in calc_csd
    csdrec.record(csd[jzpos])
RuntimeError: hoc error
I quoted the paragraph which includes the discussion of CVode.use_local_dt() because it talks about needing a POINT_PROCESS object when saving a non-RANGE variable with vdest.record with multiple threads. However, you say that it is not necessary. So I am very confused.
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: Saving vector at each time step - not associated with ce

Post by hines »

If csd is a HOC Vector then instead of

Code: Select all

csdrec.record(_ref_csd[jzpos])
the proper syntax is

Code: Select all

csdrec.record(csd._ref_x[jzpos])
Since Python is being used, I would modify
Vector.record can capture the time course of any scalar variable...
to "Vector.record can capture the time course of any HOC scalar variable...
(and also remark that Vector.x is a HOC scalar.)

Thus (assuming an nrngui -python launch):

Code: Select all

from neuron import h
h('a=4')
av = h.Vector()
av.record(h._ref_a)
h.run() #default h.dt=0.025 and h.tstop=5
av.printf()
prints 201 numbers with the value 4.
Similarly, a relaunch with

Code: Select all

from neuron import h
av = h.Vector(10)
av.indgen().add(10)
bv = h.Vector()
bv.record(av._ref_x[2])
h.run()
bv.printf()
prints 201 numbers with the value 12.
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: Saving vector at each time step - not associated with ce

Post by hines »

A few comments about 'def calc_csd(self):'
1)

Code: Select all

sec(seg.x).i_membrane
can be written

Code: Select all

seg.i_membrane
2) csd is a local variable in calc_csd and so it will be destroyed on exit from the function. When this happens, all the csdrec vectors will be removed from the internal list of vectors to be recorded.
Of course, as my previous forum response noted, Vector.record cannot accept a Python scalar anyway.

3) The overall style for Vector.record usage is to set the recordings up prior to the simulation; then they do their work of storing the scalar at every time step during the simulation.
calc_csd seems to be doing both the calculation and the setup. Vector.record cannot be called during a simulation but only prior to (or during the beginning of) finitialize(). Instead of Vector.record, you need to save the values of csd yourself using Python (assuming calc_csd is called every time step).
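A hedged sketch of saving the values yourself (names assumed; this presumes csd is kept in a hoc Vector):

Code: Select all

csd_history = []                     # one entry per time step

def save_csd(csd_vec):               # call after each fadvance()
    csd_history.append(csd_vec.c())  # clone so later updates don't overwrite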

Actually, I do not recommend modifying proc advance to do this calculation. It closes off the possibility of using variable time steps and is not very efficient for such a massive calculation.
(But variable time steps are a bit of an irrelevancy here, since the combination of extracellular and parallel can work only with the fixed step method. This is because extracellular with variable steps requires the DAE solver, and that is not available in NEURON for parallel. If it were, combining DAE and LVARDT (the local variable time step method) could prove useful, since the calculation of i_membrane is so much more accurate than with the implicit fixed step method.)
Also, for parallel simulations, ParallelContext.psolve() is the recommended method.

Instead, I would recommend doing the basic calc_csd accumulation in a mod file, e.g.

Code: Select all

NEURON {
  SUFFIX csd1
  POINTER im, csd
}
AFTER SOLVE {
  csd = csd + im*area
}
im would be made to point to i_membrane at that location, and csd would be made to point to the voxel_csd value for the voxel the location is in. It is even thread safe if the locations that contribute to the same voxel_csd value are all in one thread. A second mechanism,

Code: Select all

NEURON {
  POINT_PROCESS csd2
  RANGE voxel_csd
}
BEFORE BREAKPOINT {
  voxel_csd = 0
}
AFTER SOLVE {
  : ... append the current voxel_csd to a Vector ...
}
with one instance per voxel per thread.
Note that the ASCII order of the names means that csd1 is done before csd2 in this case.
Although I've left a lot of details out, the idea, I believe, is sound. In a parallel simulation, if individual voxels are distributed, ParallelContext.allreduce(Vector, 1) will be useful.
sgratiy
Posts: 41
Joined: Tue Mar 30, 2010 5:33 pm

Re: Saving vector at each time step - not associated with ce

Post by sgratiy »

Thanks for the detailed response. However, it is still unclear to me whether I must use a mod file to save my CSD without modifying proc advance(), or whether I can get away without introducing a mod file. (I am reluctant to use a mod file because it would be a cell mechanism, whereas CSD is not a cell mechanism but a property of a population of cells.)
My understanding was that this is possible as long as I record HOC scalars. I took your comments into account and modified my class so that Vector.record is set up before the simulation. The code compiles, but I have not run it in parallel yet. I wanted to get your feedback first, if possible.
I created the member function to set up the data recording:

Code: Select all

    def data2record(self):  # specify which data will be saved during the simulation

        self.csd = h.Vector(self.nzpos)
        self.csdlistrec = h.List()    # list of t-vectors with csd[jz](t_i)
        
        print "data2record..."
        for jz in range(self.nzpos): 
            self.csdjzrec = h.Vector()
            self.csdjzrec.record(self.csd._ref_x[jz])
            self.csdlistrec.append(self.csdjzrec)
and then another member function to perform the CSD calculations, which will also be called only once before finitialize():

Code: Select all

    def calc_csd(self):
        for sec in self.cell[0].dendritic:  # for now use only a single cell for CSD calculations; later there will be multiple cells
            for jseg,seg in enumerate(sec):    
                self.i_mem_seg[jseg] = seg.i_membrane*h.area(seg.x) # get membrane current for the segment
               
        self.csd = h.Vector(self.W*self.i_mem_seg) # both self.W and self.i_mem_seg are numpy matrices ('matrix' type from numpy)
Since self.csd is now a class member, it will not be destroyed on exit from the calc_csd function, and since it is an h.Vector, Vector.record should work, right? I am not sure, however, whether it is acceptable to convert the numpy matrix self.W*self.i_mem_seg into a Hoc Vector:

Code: Select all

        self.csd = h.Vector(self.W*self.i_mem_seg) # both W and i_mem_seg are numpy matrices ('matrix' type from numpy)
Does this work only for numpy arrays? Or should I use the from_python member function?
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: Saving vector at each time step - not associated with ce

Post by hines »

I think your concept is sound. It just has a single bug in implementation that I see. That is, during calc_csd, csd has to remain the same HOC Vector (in terms of memory storage location) as when it was created by data2record, but

Code: Select all

self.csd = h.Vector(self.W*self.i_mem_seg)
destroys the old Vector instance and creates a new Vector instance at every time step. Thus the first time calc_csd is called, all the csdlistrec Vector instances will be removed from the record list, because all the self.csd._ref_x[jz] are freed. A way to avoid this problem is to keep copying the values of self.W*self.i_mem_seg into self.csd without destroying and (re-)creating self.csd, i.e. self.csd.from_python(self.W*self.i_mem_seg). I am assuming W and i_mem_seg are numpy arrays.
http://www.neuron.yale.edu/neuron/stati ... rom_python.
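A hedged sketch of the in-place update (new_vals stands in for self.W*self.i_mem_seg; sizes chosen to match):

Code: Select all

import numpy as np
from neuron import h

csd = h.Vector(3)                    # created once, e.g. in data2record
new_vals = np.array([0.1, 0.2, 0.3]) # stand-in for self.W*self.i_mem_seg
csd.from_python(new_vals)            # same Vector object; values copied in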

One trivial thing...

Code: Select all

self.csdjzrec = h.Vector()
only needs to be local in its loop, as it gets saved in csdlistrec, so it does not need a 'self.' prefix.

The mod file I suggested was just a helper that avoided interpreter overhead in accumulating the values for the csd calculation (it eliminated the need for calc_csd). Forget that suggestion and keep on with your calc_csd and modified proc advance().

I now realize the obvious. Many people insert the extracellular mechanism for the sole purpose of computing i_membrane (for LFP or CSD), taking a big hit in performance as well as losing the ability to use lvardt or the global variable step method in parallel or with threads. I'm going to create h.nrn_i_membrane(nrn.segment) and h.nrn_i_axial(nrn.segment), which will be available for all integration methods. The returned units will be nA, and they will work properly for section locations 0 and 1. nrn_i_axial will of course be the axial current between the segment and its parent segment regardless of section orientation (least surprising if the 0 end is toward the root); thus i_axial of the root segment is always 0. (The above will also be available in HOC via the standard syntax, just like area or ri.)
sgratiy
Posts: 41
Joined: Tue Mar 30, 2010 5:33 pm

Re: Saving vector at each time step - not associated with ce

Post by sgratiy »

Mike, thanks for pointing out the bug. I am glad I am on the right track; however, I got confused by your statement:
Forget that suggestion and keep on with your calc_csd and modified proc advance().

I thought that I do not need to modify proc advance() to store the csd vector, since I can use Vector.record. The only reason I modified proc advance() in the past (for the serial implementation) was to store the csd vector.

On i_membrane: having access to i_membrane in parallel with variable time steps will be extremely useful for me and my colleagues. I am looking forward to it.

Finally, the parallel code I am modifying uses .prun(tstop), while you recommend using ParallelContext.psolve(). What is the difference between .prun() and .psolve()? I could not find any documentation on .prun()

Thanks
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: Saving vector at each time step - not associated with ce

Post by hines »

The Vector.record system is handling the saving of the values of self.csd.x[jz], but those are just constants without a call to calc_csd at every time step to assign values to self.csd.x[jz] via the statements

Code: Select all

self.i_mem_seg[jseg] = seg.i_membrane*h.area(seg.x)
self.csd = h.Vector(self.W*self.i_mem_seg)
I now realize the obvious...
Obviously useful. Not so obvious how to do it efficiently in general, since I forgot to mention the rare contributions of ELECTRODE_CURRENT and LinearMechanism instances. But I will look into it. Whether to do it by brute force on each request or to create a structure during setup time to make it more efficient, I'm not sure. There is also a tradeoff between the function approach of calculating at only the requested locations and storing values for all segments in an internal array.

ParallelNetManager.prun is implemented in http://www.neuron.yale.edu/hg/neuron/nr ... parmpi.hoc at line 214. You can see it is basically a wrapper for

Code: Select all

pc.set_maxstep(10)
stdinit()
pc.psolve(tstop)
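From Python the same idiom reads (a minimal sketch, assuming the standard run system has been loaded):

Code: Select all

from neuron import h
h.load_file("nrngui.hoc")
pc = h.ParallelContext()
pc.set_maxstep(10)   # max step any cell takes between spike exchanges (ms)
h.stdinit()
pc.psolve(h.tstop)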
Also see the long description in http://www.neuron.yale.edu/neuron/stati ... arnet.html (specifically point 7) and
http://www.neuron.yale.edu/neuron/stati ... nager.prun

ParallelNetManager was my first (and last) attempt at wrapping the network simulation capabilities of ParallelContext into a hopefully easier to understand and use "user level" class. I'm no longer very happy with it, as it obscures the important issues of efficient parallel setup and load balance. A lot of experience has been gained over the past 8 years. What is needed is a complete tutorial covering all the idioms that have been developed and used in parallel simulations. Just listing them and pointing to them in ModelDB would be a good start.
There is going to be a three day NEURON course in parallelization in San Diego in the latter part of June. http://www.neuron.yale.edu/neuron/stati ... d2014.html
and the tutorial notes will be ready for that.


Personally, I don't use ParallelNetManager anymore.
sgratiy
Posts: 41
Joined: Tue Mar 30, 2010 5:33 pm

Re: Saving vector at each time step - not associated with ce

Post by sgratiy »

Of course, I still need to call calc_csd; thanks for pointing that out. I did this in my serial implementation in the past, but mistakenly thought that with Vector.record it was no longer necessary, because calling the member function calc_csd() once in the constructor would bind

Code: Select all

self.csd
to

Code: Select all

self.W*self.i_mem_seg
but I see now that it will not.

I looked at the links you suggested about prun, but it does not call proc advance() (unless it is called inside psolve()). Where does proc advance() hide?

In serial I would do:

Code: Select all

proc advance() {
  fadvance()
  mypopulation.calc_csd()
}
Would this work in parallel as well?

Now, since I need to call calc_csd() every time step, is there any benefit to recording data into a hoc vector? I might just as well return csdvec from calc_csd() at every time step and save it into a list as follows:

Code: Select all

proc advance() { localobj tobj
  fadvance()
  tobj = mypopulation.calc_csd()
  csdhist.append(tobj)
}
As I understand it, this implementation limits me to a constant time step. But are there any other limitations? Do you still recommend following this route?

Finally, another question: at which point would I need to call pc.allreduce()?
Thanks
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: Saving vector at each time step - not associated with ce

Post by hines »

I looked at the links you suggested about prun, but it does not call proc advance() (unless it is called inside psolve()). Where does proc advance() hide?
proc advance() lives in nrn/lib/hoc/stdrun.hoc. pc.psolve does not call proc advance.
Would this work in parallel as well?
I think so. Try it out (but don't forget to call pc.set_maxstep(10) before calling run()).
since I need to call calc_csd() every time step, is there any benefit to recording data into a hoc vector?
Nope (as long as csdvec is a new instance on every time step). However, given your previous problems, I wonder if the distinction between python and hoc is going to give you trouble, i.e. how did you get mypopulation, a hoc objref, to reference the Python instance of the class containing the calc_csd method? Generally, people who work in the python world don't bother to have HOC access Python, but do all their programming in python and access NEURON via
from neuron import h
as needed.

Remember, you have only been showing me incomplete fragments of code, and I am having to guess about the (ambiguous) context. For example, the line

Code: Select all

  mypopulation.calc_csd()
I'm guessing ought to be something like

Code: Select all

  po.mypopulation.calc_csd()
where

Code: Select all

objref po
po = new PythonObject() // access to module "__main__" of the python world.
and in python

Code: Select all

mypopulation = MyPopulation(...) #where MyPopulation is the python class defining the python calc_csd method
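Putting the two halves together (a hedged sketch; MyPopulation here is a stand-in class):

Code: Select all

from neuron import h

class MyPopulation:                # stand-in for the real class
    def calc_csd(self):
        print("calc_csd called")

mypopulation = MyPopulation()      # lives in python's __main__
h('objref po')
h('po = new PythonObject()')       # po.<name> resolves in __main__
h('po.mypopulation.calc_csd()')    # hoc calling the Python method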
As I understand it, this implementation limits me to a constant time step.
Yes
But are there any other limitations?
No
Do you still recommend following this route?
I think you might be close enough to the end, that it can work without too much more effort.
Finally, another question: at which point would I need to call pc.allreduce()?
It is not entirely clear that you need to add up the csdvec. The question I have is: what is the relationship among all the csdvec that you have preserved in all the processes, and what do you want to do with them? Allreduce is useful only if you want to add vectors together. pc.py_alltoall is the workhorse for sending specific numpy arrays to specific ranks for summation on the target ranks.
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: Saving vector at each time step - not associated with ce

Post by hines »

I was thinking about how to get a python callable executed at each time step without using proc advance, and remembered the recent invention of the nonvintblock python module that interfaces rxd to the internal integrators. That wasn't quite what was needed, but here is a way to do it with a helper mod file that provides the magic. Before showing the mod file, here is the test program that exercises it.

Code: Select all

$ cat test.py
from neuron import h
h.load_file("nrngui.hoc")
pc = h.ParallelContext()
nhost = int(pc.nhost())
rank = int(pc.id())

def foo(): #to be called at every time step
  print '%d foo: t=%g' % (rank, pc.t(0))

s = h.Section()
a = h.PyBeforeStep(s(0.5))
a.set_callback(foo)

h.tstop=.1
print 'fixed step fadvance'
h.run()

print 'fixed step pc.psolve'
pc.set_maxstep(10)
h.stdinit()
pc.psolve(h.tstop)

h.cvode_active(1)
print 'var step fadvance'
h.run()

When executed, it will produce:

Code: Select all

$ nrniv -python test.py
NEURON -- VERSION 7.4 (1096:294dac40175f) 2014-05-19
Duke, Yale, and the BlueBrain Project -- Copyright 1984-2014
See http://www.neuron.yale.edu/neuron/credits

loading membrane mechanisms from x86_64/.libs/libnrnmech.so
Additional mechanisms from files
 pyas.mod
fixed step fadvance
0 foo: t=0
0 foo: t=0.025
0 foo: t=0.05
0 foo: t=0.075
0 foo: t=0.1
fixed step pc.psolve
0 foo: t=0
0 foo: t=0.025
0 foo: t=0.05
0 foo: t=0.075
0 foo: t=0.1
var step fadvance
0 foo: t=0
0 foo: t=0.1
>>> 
Now here is the needed mod file.

Code: Select all

$ cat pybs.mod
NEURON {
  POINT_PROCESS PyBeforeStep
  POINTER callback
}

ASSIGNED {callback}

VERBATIM
extern int (*nrnpy_hoccommand_exec)(Object*);
ENDVERBATIM

BEFORE STEP {
VERBATIM
  if (nrnpy_hoccommand_exec && _p_callback) {
    Object** po = (Object**)(&(_p_callback));
    (*nrnpy_hoccommand_exec)(*po);
  }
ENDVERBATIM
}

PROCEDURE set_callback() {
VERBATIM
  Object** po = (Object**)(&(_p_callback));
  Object* old = *po;
  *po = (Object*)0;
  if (ifarg(1)) {
    *po = *hoc_objgetarg(1);
    hoc_obj_ref(*po);
  }
  if (old) { hoc_obj_unref(old); }
ENDVERBATIM
}
Two major remarks about this.

Why am I using BEFORE STEP instead of AFTER SOLVE?
Because variable time integration steps can bounce around, and completed time steps can be abandoned or interpolated to event times. BEFORE STEP ensures that time will not decrease during the next integration or event handling. In fact, the BEFORE STEP handling is computationally immediately prior to the Vector.record handling.

Why is foo printing pc.t(0) instead of h.t for the time?
Because foo is being called from an internal location prior to the update of h.t, from a thread whose current time may, in principle, differ from the current time of other threads. In fact, this mod file is THREADSAFE, but I didn't mark it as such since the thread id is not being passed to the python callback (the implementation of foo, by using pc.t(0), explicitly assumes that the thread is thread 0).

A few minor remarks.

The pybs.mod file assumes the 7.3-ansi or 7.4 versions, in the sense that many of the prototypes of the functions used by pybs.mod in the VERBATIM blocks are automatically included; e.g. see nrn/src/oc/oc_ansi.h. If you wish to use pybs.mod with an earlier version of NEURON, explicitly declare

Code: Select all

extern Object** hoc_objgetarg(int);
extern int ifarg(int);
extern void hoc_obj_ref(Object*);
extern void hoc_obj_unref(Object*);
I don't check, but the argument to set_callback must be a python object (usually a callable; rarely, a tuple like (callable, arg1) or (callable, (arg1, arg2, ...))). The latter two are not very useful in this context.

The idiom for storing a PythonObject* in what is normally a double* is commonly used also for Vector objects.

If one stores an object whose reference count might go to 0, it is critical to reference it and then, when no longer needed, unreference it.
sgratiy
Posts: 41
Joined: Tue Mar 30, 2010 5:33 pm

Re: Saving vector at each time step - not associated with ce

Post by sgratiy »

Thanks for the detailed response. I certainly got confused about the distinction between Python and Hoc. I do my programming in Python and access Hoc with 'from neuron import h', as you mentioned. It still remains a mystery to me how to modify proc advance() from Python in order to insert into it a call to a Python function at every time step. Here is more detailed code:

I have a main.py file which creates an instance of the Python class:

Code: Select all

mypopulation = Population(cellTemplate = h.L2_PC, nCells = 3) 

the class Population has a member function calc_csd, which is supposed to return the CSD calculated at every time step from the cells on a given node.

Code: Select all

    def calc_csd(self):
        
        for sec in self.cell[0].dendritic:  # for now use only a single cell for CSD calculations
            for jseg,seg in enumerate(sec):    
                self.i_mem_seg[jseg] = seg.i_membrane*h.area(seg.x) # get membrane current for the segment
               
        return dot(self.W,self.i_mem_seg) # both W and i_mem_seg are matrices
then, I assume, I need to run the simulation with the following commands in main.py:

Code: Select all

#############################################################
########    Simulation control    ##########################
pc.set_maxstep(10)
h.stdinit()
pc.psolve(h.tstop)
###############################################################
But I still do not understand how to modify the Hoc function proc advance() from Python to include a call to Python's mypopulation.calc_csd() at every time step.

Also, you mentioned using run(). But will run() work in parallel?

I need to sum the CSDs calculated by each node (for the cells located on that node) to get the total CSD for the population of cells allocated across multiple nodes. However, I am not sure at which point I would need to do that.

I registered for the parallel NEURON course and am looking forward to learning from it.
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: Saving vector at each time step - not associated with ce

Post by hines »

If you use the pybs.mod I showed above, then you should not modify proc advance. Proc advance is not used by pc.psolve.
If you decide you do not want to use pybs.mod but instead modify proc advance, then you can say, in the python world,

Code: Select all

from neuron import h
# assuming 'mypopulation' exists in python as a variable in the __main__ module
h.load_file("nrngui.hoc")
h('''
objref po
po = new PythonObject()
proc advance() {
  fadvance()
  po.mypopulation.calc_csd()
}
''')
Note that the above shows how to call a python method of a python object from hoc, when the reference to the python object is in the __main__ namespace. But I'm doing nothing with the return value, so it is being lost.

h.run() should work in parallel. For parallel, I prefer the idiom that uses pc.psolve().

Is W a 1-d numpy array? It appears that i_mem_seg is a 1-d numpy array. Then I presume the return value of calc_csd is a scalar. Is that correct?
If so, it seems to me that calc_csd, instead of returning the value, should append it to a HOC Vector. Then at the end of the simulation that Vector would be the trajectory of that node's contribution to the total CSD, and you could say
pc.allreduce(csdvec, 1)
and the csdvec on every node would be the sum of the original csdvec from every node. cf
http://www.neuron.yale.edu/neuron/stati ... .allreduce
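A minimal sketch of that end-of-run reduction (hedged; csdvec and the per-step append are as assumed above):

Code: Select all

from neuron import h
pc = h.ParallelContext()
csdvec = h.Vector()      # this rank's CSD trajectory, one value per time step

# ... calc_csd appends one value to csdvec at every time step ...
pc.set_maxstep(10)
h.stdinit()
pc.psolve(h.tstop)

pc.allreduce(csdvec, 1)  # 1 = add: every rank now holds the summed trajectory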
ted
Site Admin
Posts: 6289
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

Re: Saving vector at each time step - not associated with ce

Post by ted »

hines wrote:There is going to be a three day NEURON course in parallelization in San Diego in the latter part of June. http://www.neuron.yale.edu/neuron/stati ... d2014.html
and the tutorial notes will be ready for that.
The parallel simulation course is a two day course--starts Wednesday, June 25 at 9 AM and ends Thursday, June 26 at 5 PM.