I am working on a Python extension of the Hahn & McIntyre model
(http://senselab.med.yale.edu/ModelDb/sh ... del=127388) for the purpose of performing data analysis at each timestep of the model. The model uses ParallelNetManager for parallelization, and I am attempting to use the BeforeStepCallback method (as described here by Hines:
viewtopic.php?f=2&t=3389#p14347) to call a Python function at each timestep.
To start with, I created a Python wrapper for the H&M model. This code runs the model on any number of processors and successfully collects and saves all the spikes at the end of the run. (For clarification, the 'py_parBGLaunch.hoc' file is the same as parBGLaunch.hoc from ModelDB, except that it no longer asks for a launch file. parBGLaunch.hoc completely sets up the model but does not run it; the pBGLaunch.000 file is responsible for initializing and running the model.)
Code:
from mpi4py import MPI
from neuron import h, load_mechanisms
comm = MPI.COMM_WORLD
load_mechanisms('H&M/') #Loads mechanisms (Including BSCallback)
h.load_file('H&M/py_parBGLaunch.hoc') #Loads model
h.getOutput('H&M/pNets/dat',0,1) #Loads parameter set
h.tstop = 1000
h.pnm.pinit()
h.pnm.prun()
h.pnm.gatherspikes()
h.saveSpikes()
h.pnm.pc.runworker()
h.pnm.pc.done()
I then added an empty beforestep() function, as shown below, which also runs on any number of processors without issue.
Code:
from mpi4py import MPI
from neuron import h, load_mechanisms
comm = MPI.COMM_WORLD
def beforestep():
    pass
load_mechanisms('H&M/') #Loads mechanisms (Including BSCallback)
h.load_file('H&M/py_parBGLaunch.hoc') #Loads model
h.getOutput('H&M/pNets/dat',0,1) #Loads parameter set
bs_sec = h.Section()
bscallback = h.beforestep_callback(bs_sec(0.5))
bscallback.set_callback(beforestep)
h.tstop = 1000
h.pnm.pinit()
h.pnm.prun()
h.pnm.gatherspikes()
h.saveSpikes()
h.pnm.pc.runworker()
h.pnm.pc.done()
Enough of the preamble. The problem I am having arises in the beforestep() code. Below is a brief pseudo-code outline of what I need to do in beforestep(). First, the master collects the spikes from all the workers and then performs a quick data analysis on the spikes, producing an action to be taken; meanwhile, the workers initialize an empty variable, action. Then, after a barrier syncing all processors, the action is broadcast from the master to all workers, and all processors execute the action.
Code:
def beforestep():
    if comm.Get_rank() == 0:
        h.pnm.gatherspikes()
        action = [some analysis code]
    else:
        action = None
    comm.barrier()
    action = comm.bcast(action, root=0)  # bcast returns the value; it must be assigned
    [execute action]
Obviously, I did not attempt to implement this all at once. I started with just performing the data analysis on the spikes on the master, shown below. This code runs without error on any number of processors (though when run on more than one processor, not all the spikes are available to the master).
Code:
def beforestep():
    if comm.Get_rank() == 0:
        action = [some analysis code]
I then attempted to use h.pnm.gatherspikes() to get all of the spikes from the workers to the master. This is where the problem first arises. The pseudo-code below runs on one processor, but stalls at the first timestep for any n > 1 processors.
Code:
def beforestep():
    if comm.Get_rank() == 0:
        h.pnm.gatherspikes()
        action = [some analysis code]
I believe the issue arises because of the ParallelNetManager. Near the end of py_parBGLaunch.hoc, pnm.pc.runworker is called. From what I understand, this sends the workers into an infinite loop asking for tasks from the master, while the master returns immediately. Then, in the Python code, I call h.pnm.pinit() and h.pnm.prun(), initializing and running the model. When the code enters beforestep() and calls h.pnm.gatherspikes(), the master asks the workers for spikes, but the workers are stuck in their infinite loops awaiting tasks from the master! How do I avoid this problem?
Interestingly, h.pnm.gatherspikes() does not cause a problem after the model has finished running.