NetStim causes SaveState crash

Moderator: hines

ajpc6d
Posts: 33
Joined: Mon Oct 24, 2022 6:33 pm

NetStim causes SaveState crash

Post by ajpc6d »

The code below is meant to run a simulation in small chunks (basically useless for the toy script here, but useful to control RAM consumption for very large datasets). The h.ExpSyn() / h.NetStim() / h.NetCon() trio causes NEURON to deliver the error message
NEURON: ExpSyn[0] :Event arrived out of order. Must call ParallelContext.set_maxstep AFTER assigning minimum NetCon.delay
This message seems like a non sequitur: I am not using ParallelContext, and I have no reason to.
The problem appears to arise from the interaction among h.NetStim().start, NetStim().number, and SaveState().
Thoughts?

Code: Select all

from neuron import h
import numpy as np
import matplotlib.pyplot as plt
h.load_file("stdrun.hoc")

# define the model
cell = h.Section(name='cell')
cell.nseg = 5
cell.L = 1e2
cell.diam = 4
cell.insert(h.hh)

# define the synaptic system
syn = h.ExpSyn(cell(0.5))
syn.tau = 0.9
syn.e = -50
syn.i = 1e-9
ns = h.NetStim()
ns.interval = 5 # this represents the delay between pre-synaptic events
ns.number = 2 # fails if > 1
ns.start = 0 # does not appear to make a difference
ns.noise = 0
nc = h.NetCon(ns, syn, sec=cell)
nc.threshold = 0
nc.delay = 1.2 # this represents the delay of synaptic transmission
nc.weight[0] = 1e-1 # if weight[0] < 0, the connection is off

# the tmp_v and tmp_t Vectors are over-written on each batchrun_file() loop;
# therefore we also declare the v and t Vectors, to which each round of
# tmp_v and tmp_t is appended.
tmp_v = h.Vector().record(cell(0.5)._ref_v)
tmp_t = h.Vector().record(h._ref_t)
v = h.Vector()
t = h.Vector()

# declare the SaveState object
savestate = h.SaveState()

def batchrun_file(iters=3):
    # batchrun_file() performs a simulation in parts, where each part's duration is
    # a function of 'chunk' and the data read in from a .xlsx file of time-domain waveform data
    for i in range(iters):
        # re-initialize at each loop
        h.finitialize(-67)

        # h.CVode().print_event_queue()

        if i > 0:
            # after the first loop, restore the previous state
            savestate.restore(1)
        if h.CVode().active():
            # ensures cvode consistency
            h.CVode().re_init()
        else:
            h.fcurrent()

        # frecord_init() is undocumented, but appears to be necessary for the recording Vectors
        # to coordinate time-stamping between loops
        h.frecord_init()
        # the length passed to continuerun() must account for the inherited h.t value and increase
        # accordingly
        h.continuerun(5 * (i + 1))
        # after the simulation chunk, save the state
        savestate.save()

        # append the (temporary) recording Vectors to the permanent Vectors
        v.append(tmp_v.c())
        t.append(tmp_t.c())

batchrun_file(iters=10)
plt.plot(t, v)
plt.show()
ajpc6d
Posts: 33
Joined: Mon Oct 24, 2022 6:33 pm

Re: NetStim causes SaveState crash

Post by ajpc6d »

I haven't yet been able to resolve this issue.
ted
Site Admin
Posts: 6315
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

Re: NetStim causes SaveState crash

Post by ted »

I don't think you have to do SaveState. I just uploaded

Code: Select all

https://www.neuron.yale.edu/ftp/ted/neuron/segmentedrun.zip
which contains hoc and Python examples that include event-driven synaptic transmission. Try either one by executing
nrngui demo.hoc
or
python -i demo.py
ajpc6d
Posts: 33
Joined: Mon Oct 24, 2022 6:33 pm

Re: NetStim causes SaveState crash

Post by ajpc6d »

I've avoided hoc as much as I could thus far, so let me check that I understand what's happening between the .py and .ses files.

pyrig.ses recreates the h.IClamp() instance (which can also be created in Python), the RunControl window, and the h.Graph() object.

pysyndrive.ses creates the h.NetStim() and h.ExpSyn() objects. Again, these could alternatively be created in Python. Neither .ses file is necessary, strictly speaking.

h.tstop_changed() is the lynchpin of this alternative to SaveState, correct? I don't see this function documented anywhere, so I'm not entirely sure what it does -- apart, ostensibly, from changing h.tstop() to something.
ted
Site Admin
Posts: 6315
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

Re: NetStim causes SaveState crash

Post by ted »

Sure, everything can be done entirely by writing code--hoc or Python or some combination of both--but NEURON's InterViews-based GUI provides many powerful tools for getting a lot done with little or no programming effort. And these tools give you immediate access to key parameters. Very useful for interactive development, debugging, and exploration of models. See "Exploit NEURON's GUI tools" (viewtopic.php?t=178).

And that's what I used the GUI tools for: to construct a simulation in which "endogenous" activity of a model cell is punctuated by synaptic events, and to use that as the basis for demonstrating how to execute a long simulation as a sequence of shorter runs.
h.tstop_changed() is the lynchpin of this alternative to SaveState, correct?
Not at all. Its purpose is purely cosmetic. All it does is ensure that Graphs that are updated during each fadvance (i.e. graphs that have been appended to the run time system's graphList[0]--that's h.graphList[0] to you) will have an abscissa (x axis) that runs from 0 to tstop (that's h.tstop to you).

No, the key to this whole thing is that SaveState turns out to be unnecessary for running long simulations. Unless you intend to terminate a simulation now, then return to it later and expect to pick up where you left off.
ajpc6d
Posts: 33
Joined: Mon Oct 24, 2022 6:33 pm

Re: NetStim causes SaveState crash

Post by ajpc6d »

I think I need help understanding the relationship of h.run(), h.continuerun(), and h.tstop. Below is a snippet of your code containing calls for all three.

Code: Select all

def erun():
  # prepdatastores()
  h.tstop = EPOCHDUR
  h.tstop_changed() 
  myinit() # model-specific initialization
  for ii in range(NUM):
    h.continuerun(h.t + EPOCHDUR)
    housekeeping() # stuff to do at end of each epoch

erun()

## vstore, tstore contain the concatenated epoch data

##### test by comparing with results of a single run of NUM*EPOCHDUR ms

h.tstop = NUM*EPOCHDUR
h.tstop_changed()
h.run()
At the end of the code, h.tstop is set to a numerical value, and then h.run() is called with no arguments. From this, I gather that setting h.tstop enacts some internal change in NEURON.
But in the erun() function, h.tstop is set to EPOCHDUR, and h.continuerun() is called with the argument h.t + EPOCHDUR, which will evaluate to a different value on each iteration. So h.continuerun() requires an input parameter, suggesting that setting h.tstop enacts a necessary but insufficient internal change in NEURON.
Presumably h.run() and h.continuerun() do very different things under the hood. But how do I know which to use in a particular case? Or, more concretely, why are they used the way they are in this example?
ted
Site Admin
Posts: 6315
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

Re: NetStim causes SaveState crash

Post by ted »

Good questions.

tstop and continuerun are part of NEURON's standard run system, much of which is implemented in hoc. You'll find the hoc component of the standard run system in a file called stdrun.hoc. You can learn a lot by finding and browsing through stdrun.hoc (hints: use your OS's file search to find it; the path to it will include share/nrn/lib/hoc).

In particular, you can begin to guess the answers to some of your own questions.

For example, search stdrun.hoc for
proc run
Notice that run() calls stdinit(). So find
proc stdinit
and discover what it calls. It will be illuminating to make a "call table" for proc run(), i.e. an indented outline of the procedures and functions that run() calls.
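For what it's worth, the top of that call table looks roughly like the outline below. This is reconstructed from memory of stdrun.hoc, so treat the details, especially what stdinit() calls besides init(), as things to verify in your own copy:

```
run()
    stdinit()
        init()                    // the proc that custom initializations override
            finitialize(v_init)   // sets t = 0 and initializes states
    continuerun(tstop)            // advances until t >= tstop
```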

< SPOILER ALERT >
proc run() initializes a model (and sets t to 0) and then executes a simulation that ends when t >= tstop.
proc continuerun(foo) executes a simulation without setting t to 0 or initializing the model, and stops when t >= foo.

So what do you think the for loop in erun() does? If you're not sure, print the value of h.t right after the continuerun call, and see what happens when you call erun().
how do I know . . . why are they used . . .
That's what experience, judgement, and learning to think your way through an algorithm are about. The latter is akin to mathematical induction.