Does POINTER make NET_RECEIVE slower?


Does POINTER make NET_RECEIVE slower?

Post by fietkiewicz »

I have a small NMODL system that uses NET_RECEIVE with WATCH in order to force a lower limit for a state variable. I want to separate the two equations into different .mod files by adding a POINTER, but this seems to dramatically reduce the efficiency and increase the run time by a few orders of magnitude.

My test system has two equations. One is an algebraic equation that defines a variable 'b' that is just an oscillator. The other is a differential equation for variable 'a' that is driven by 'b' but which enforces a lower limit of zero for 'a'. The goal of this endeavor is to use the same approach for a much larger model (i.e. many more differential equations), such that major components of the system are separated into separate .mod files. This will require using POINTER to access state variables between each POINT_PROCESS.

Here is a version with both equations in a single .mod file:

Code: Select all

NEURON {
	POINT_PROCESS equations
}

PARAMETER {   :: Declare and set any parameters required for this mod file here.
	b0 = 1.0  :: Scaling parameter
	w = 0.628 :: Frequency parameter
}

STATE { a b } :: Declare state variables here.

BREAKPOINT {
	SOLVE states METHOD derivimplicit :: See template for instructions.
}

INITIAL {
	a = 1.0 :: Set initial value of state variable.
	b = 1 :: Set initial value.
	net_send(0,1)
}

DERIVATIVE states {
	b = b0 * cos(w * t) :: Algebraic equation
	a' = a * (1 - a) - b :: Differential equation
}

NET_RECEIVE (dummy_weight) {
	if (flag == 1) {
		WATCH (a < 0.0) 2
	}
	else if (flag == 2) {
		net_event(t)
		a = 0.0
	}
}
The following .hoc code uses the above NMODL:

Code: Select all

create model
access model
objref obj
obj = new equations(0.5)
objref cvode
cvode = new CVode(1)
cvode_active(1)
The performance problem occurs when the two equations are placed in separate .mod files. Below are the two separate .mod files, named 'brain' and 'body', respectively, where a POINTER is used in 'brain' to access the variable 'b' from 'body'.

Code: Select all

NEURON {
	POINT_PROCESS brain
	POINTER b
}

ASSIGNED { b } :: Declare all pointer variables here.

STATE { a } :: Declare state variables here.

BREAKPOINT {
	SOLVE states METHOD derivimplicit :: See template for instructions.
}

INITIAL {
	a = 1.0 :: Set initial value of state variable.
	net_send(0,1)
}

DERIVATIVE states {
	a' = a * (1 - a) - b :: Differential equation
}

NET_RECEIVE (dummy_weight) {
	if (flag == 1) {
		WATCH (a < 0.0) 2
	}
	else if (flag == 2) {
		net_event(t)
		a = 0.0
	}
}

Code: Select all

NEURON {
	POINT_PROCESS body
}

PARAMETER {   :: Declare and set any parameters required for this mod file here.
	b0 = 1.0  :: Scaling parameter
	w = 0.628 :: Frequency parameter
}

STATE { b } :: Declare state variables here.

BREAKPOINT {
	b = b0 * cos(w * t) :: Algebraic equation
}

INITIAL {
	b = 1 :: Set initial value.
}
The following .hoc code uses the above NMODL:

Code: Select all

create model
access model
objref objBrain, objBody
objBrain = new brain(0.5)
objBody = new body(0.5)

// Set pointer
setpointer objBrain.b, objBody.b

objref cvode
cvode = new CVode(1)
cvode_active(1)
The 2nd example runs a couple orders of magnitude slower, and I assume it is due to using the POINTER. Is this expected? Thanks in advance for any feedback.

Re: Does POINTER make NET_RECEIVE slower?

Post by ted »

I think the problem is related to the fact that not even the original mod file does quite what one might expect.
Use the GUI to add a RunControl panel, plus two graphs:
one that plots obj.a vs. t (create a Voltage axis graph, Delete v, then use Plot what? to add obj.a to the plotlist),
and
one that plots obj.b vs. t (create another Voltage axis graph, etc.)

Save this user interface to a file called rig.ses (you'll need it again).

Now run a simulation. Note that obj.a drops significantly below the x axis (because the adaptive integrator advances by a big step) before jumping up to 0. Also note that plotting seems to slow down at that point, until obj.a visibly rises above the x axis. You might also note that the obj.b trace seems to thicken a bit over the interval when obj.a is on top of the x axis.

Next insert some printf statements into your equations mod file.

Code: Select all

At the top of the BREAKPOINT block insert printf("\nBREAKPOINT  t == %g  a == %g  b == %g\n", t, a, b)
At the top of the INITIAL block insert printf("\nINITIAL  t == %g\n", t)
At the top of the DERIVATIVE block insert printf("\nDERIVATIVE  t == %g\n", t)
At the top of the NET_RECEIVE block insert printf("\nNET_RECEIVE  t == %g\n", t)
And add xopen("rig.ses") to the end of your hoc file.
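For concreteness, the two instrumented blocks that matter most for what follows might look like this (a sketch only; the flag argument in the NET_RECEIVE printf is my addition, so the terminal output distinguishes flag 1 from flag 2 events, and the INITIAL and DERIVATIVE blocks are instrumented the same way):

Code: Select all

BREAKPOINT {
	printf("\nBREAKPOINT  t == %g  a == %g  b == %g\n", t, a, b)
	SOLVE states METHOD derivimplicit
}

NET_RECEIVE (dummy_weight) {
	printf("\nNET_RECEIVE  t == %g  flag == %g\n", t, flag)
	if (flag == 1) {
		WATCH (a < 0.0) 2
	}
	else if (flag == 2) {
		net_event(t)
		a = 0.0
	}
}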

Execute the hoc file, then in the RunControl panel click Init.
OK so far.
Next click on Single Step. Do it again. And again. The simulation is advancing, but the printf in the BREAKPOINT block isn't being called. For this mechanism that's just a cosmetic problem; we'll return to it later.

Keep advancing by single steps while monitoring the values of t printed to the terminal. Notice that, after obj.a is reset to 0, the integration time steps become very short. Also notice that the NET_RECEIVE printf statement starts reporting flag == 2 at every other "single step". This thing is pumping out a lot of self-events to the event delivery system.

Why?

Add a new graph to the user interface. Make it plot obj.a vs. t. Use this graph's Set View to change the x axis range to 1.3...2.8, and its y axis range to -1e-4...1e-4. Click on Init & Run and see . . . what?

Add another new graph that plots obj.a vs. t, with x axis range 2.0...2.001 and y axis range -5e-5...5e-5. Run a new simulation. Notice that after a is reset to 0, it ramps back down to a very small negative value (small because the integrator chooses a small time step), which triggers the WATCH statement to launch a new self-event, which in turn forces a to be reset to 0 etc. etc. etc.

Now you see why execution slows down and why there are so many self-events. How to proceed at this point depends on what you actually want this mechanism to do.


Now for an aside:

As promised earlier, we return to "how to force execution of the printf statement in the BREAKPOINT block?" For this particular example, this is just a cosmetic issue, because the simulation executes just fine, so the SOLVE statement is doing what it should. But we might as well put some rouge on this pig's cheeks.

The whole purpose of the BREAKPOINT block is to specify how to solve the mechanism's equations, and to calculate whatever current is generated by the mechanism. The former is accomplished by the SOLVE statement(s) in the BREAKPOINT block. The latter requires the existence of a statement in the BREAKPOINT block that assigns a value to a current that is generated by the mechanism. If such a statement is not present, other non-SOLVE statements in the BREAKPOINT block are ignored. Must have made sense at some time in the remote past.

The easiest way to force execution of the non-SOLVE statements in a BREAKPOINT block is to declare a dummy variable that is a NONSPECIFIC_CURRENT and assign it a value of 0 in the BREAKPOINT block. You can accomplish that like so:
1. declare a new assigned variable called dummy; for this particular example, add this block to the mod file
ASSIGNED { dummy }
2. in the NEURON block insert this statement
NONSPECIFIC_CURRENT dummy
3. in the BREAKPOINT block insert the statement
dummy = 0
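Put together, the affected blocks might look something like this (a sketch assembling the three steps above; the printf from earlier is kept to show that it now executes, and because dummy is always 0 the mechanism injects no current):

Code: Select all

NEURON {
	POINT_PROCESS equations
	NONSPECIFIC_CURRENT dummy
}

ASSIGNED { dummy }

BREAKPOINT {
	SOLVE states METHOD derivimplicit
	dummy = 0  : assigning a current makes the translator keep the non-SOLVE statements
	printf("\nBREAKPOINT  t == %g  a == %g  b == %g\n", t, a, b)
}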

Now compile the mechanism, and finally launch your model and execute a simulation.

Re: Does POINTER make NET_RECEIVE slower?

Post by fietkiewicz »

Thanks very much, Ted. Yes, that makes sense once I look at the tiny oscillations at the tiny time steps. The print statements also help to appreciate the dramatic change in time step at that point. I have two follow-up questions:

First, am I limited to using fixed time step for this type of boundary-limiting model? For context, I'm working with biophysical models that include physical boundary limits (e.g. maximum muscle length, bone that meets a physical barrier, etc.). The initial question with the simple model was, "What is the best way to handle such discontinuities in NEURON?". The internal question at the time of my post was, "Can variable time step be used efficiently for this?" It seems the answer is "no". Previously, I just used fixed time step and reset the state variable inside the BREAKPOINT block like this:

Code: Select all

NEURON {
	SUFFIX fixedstep
}

PARAMETER {
	b0 = 1.0  :: Scaling parameter
	w = 0.628 :: Frequency parameter
}

STATE { a b }

BREAKPOINT {
	SOLVE states METHOD derivimplicit
	if (a < 0) { a = 0.0 }
	b = b0 * cos(w * t) :: Algebraic equation
}

INITIAL {
	a = 1.0
}

DERIVATIVE states {
	a' = a * (1 - a) - b :: Differential equation
}
This can be used with the following hoc commands:

Code: Select all

create model
access model
insert fixedstep
This works fine, and there's no problem with separating the 'a' and 'b' components into separate mod files with a POINTER. I considered using variable time step because of positive experience with other integrators (XPP and Matlab's ode15s). On those platforms, one can put in simple "if-else" statements to reset the states, and the variable time step machinery somehow seems to avoid getting bogged down. So that's why I tried the approach with WATCH, which seems to be the only way to even enable NEURON to use variable time step properly. So to repeat the 1st question, am I limited to using fixed time step for this?
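To illustrate what I mean, the 'a' component of that fixed-step split might look like the sketch below, which just combines the 'brain' mechanism from my first post with the reset in the BREAKPOINT block above (the name fixedstep_a is made up):

Code: Select all

NEURON {
	SUFFIX fixedstep_a  : hypothetical name for this sketch
	POINTER b
}

ASSIGNED { b }

STATE { a }

BREAKPOINT {
	SOLVE states METHOD derivimplicit
	if (a < 0) { a = 0.0 } : hard reset at the lower boundary
}

INITIAL {
	a = 1.0
}

DERIVATIVE states {
	a' = a * (1 - a) - b : Differential equation
}
The 'b' mechanism and the setpointer call stay essentially as in my second example.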

Second, as a potentially moot question, does it make sense that the use of a POINTER would exacerbate the inefficiency? I haven't tested the efficiency impact of using a POINTER when using fixed time step, but this makes me wonder whether that would have much impact for longer simulations. I can test this myself, but I might as well ask for your prediction.

Thanks for your thorough look at my 1st question. Haven't posted a question for 9 years, and it's inspiring to see that you're still out there helping everyone!

Re: Does POINTER make NET_RECEIVE slower?

Post by ted »

Two questions:
1. a must not drop below 0 regardless of what b does?
2. if a == 0 and the RHS of
a' = a * (1 - a) - b
becomes positive, a should follow?

Re: Does POINTER make NET_RECEIVE slower?

Post by fietkiewicz »

Yes, and yes. It's a somewhat arbitrary test of enforcing boundary limits on a state variable. The variable 'a' could possibly mimic the average firing rate of a neuronal population driven by some other input. In that interpretation, the firing rate should never be negative.

Re: Does POINTER make NET_RECEIVE slower?

Post by fietkiewicz »

Ted, if it helps, I'll try to condense my questions and recent reply. The questions are:

1. Assuming a hard if-then reset is absolutely necessary, is the best practice generally to ensure the integrator is using a fixed time step? And if so, is it reasonable to use an if statement in the BREAKPOINT block? My conclusion is that the answer to both is "yes".

2. Does the use of POINTER decrease efficiency in NMODL? The effect was dramatic in my extreme first case, which used NET_RECEIVE, WATCH, and variable time step together. Your analysis really clarified why the variable time step method by itself was hurting the efficiency, so whatever effect the POINTER had appears merely to have been amplified by that.

Regarding my example system, you asked: (1) must 'a' not drop below 0 regardless of what 'b' does? Yes. (2) If a == 0 and the RHS becomes positive, should 'a' follow? Yes. One interpretation of 'a' is the average firing rate of a driven neuronal population, where the firing rate should never be negative.

Thanks in advance as I try to understand best practices for dealing with discontinuities in NMODL.

Re: Does POINTER make NET_RECEIVE slower?

Post by ted »

ted wrote:
Two questions:
1. a must not drop below 0 regardless of what b does?
2. if a == 0 and the RHS of a' = a * (1 - a) - b becomes positive, a should follow?

fietkiewicz wrote:
Yes, and yes. It's a somewhat arbitrary test of enforcing boundary limits on a state variable.

Using WATCH to detect when a drops below 0 keeps the adaptive integrator from missing when a < 0, but

1. it doesn't determine the exact instant when a becomes negative, so if the time step is big enough and the RHS of a' = . . . is negative enough, a can drop significantly below 0 before it is reset to 0
2. even though a is reset to 0, it won't stay there because of the broad, deep negative troughs in the forcing function -b. And we know what that causes (time-wasting high frequency sawtooth oscillation of a).

One way to avoid all this would be to apply a squashing function to -b that attenuates the negative troughs so that a will remain positive. This is easily done with e.g. the logistic function, but location of its midpoint, and its slope at that point would have to be determined empirically--and if you change -b's parameters or underlying function, you'd have to tinker with the logistic function's parameters.* Also, the logistic function's parameters will affect the value of a at the end of the trough, and that could have a noticeable effect on the subsequent time course of a.

*--I have some code that does this, and will post it after running some more tests.
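For reference, a generic logistic squashing function in NMODL could look like the sketch below (the function name and argument list are mine); its midpoint x0 and steepness k are exactly the two quantities that would have to be tuned empirically:

Code: Select all

FUNCTION logistic(x, x0, k) {
	: ranges from ~0 for x far below x0 to ~1 for x far above x0
	: x0 sets the midpoint, k sets the slope at the midpoint
	logistic = 1/(1 + exp(-k*(x - x0)))
}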
fietkiewicz wrote: "is the best practice generally to ensure the integrator is using a fixed time step"
"Best" in what sense? Fixed time step is very forgiving of all kinds of sloppy tricks, but what if you're dealing with a model that has one or more ODEs which require adaptive integration for stability or accuracy?


POINTERs generally don't affect computational burden.

Re: Does POINTER make NET_RECEIVE slower?

Post by fietkiewicz »

Thanks, Ted! Yes, the squashing function does the trick to enable variable time step. As you pointed out, it's awkward because things must be tailored just right. As a proof of concept, I flipped the sign on the driving force (+b) and scaled it considerably (b0 = 100). For example:

Code: Select all

NEURON {
	POINT_PROCESS equations
}

PARAMETER {   :: Declare and set any parameters required for this mod file here.
	b0 = 100.0  :: Scaling parameter
	w = 0.628 :: Frequency parameter
}

STATE { a b } :: Declare state variables here.

BREAKPOINT {
	SOLVE states METHOD derivimplicit
}

INITIAL {
	a = 1.0 :: Set initial value of state variable.
	b = 1 :: Set initial value.
}

DERIVATIVE states {
	b = b0 * cos(w * t) :: Algebraic equation
	a' = a * (1 - a) + b / (1 + exp(-100 * b)) :: Differential equation
}
So that's enough to prove that variable time step is plenty fast, as expected. This was a toy model, so it's not worth the effort to try perfecting the squashing rate (-100), which would affect the rest of the system. I have a more complicated model that I'll consider this for next. Coincidentally, I had actually tried to do this trick earlier but it wasn't working properly, so your suggestion motivated me to dig it back up and fix it. I think I'm good for now, as I prepare to try incorporating this into my big model. Thanks!

Re: Does POINTER make NET_RECEIVE slower?

Post by ted »

You got the basic idea, even though in retrospect my discussion about how to deal with the forcing function was actually rather misleading. I should have written something like this:

The inherent dynamics of a are governed by the first term on the RHS of the ODE, that is, a * (1 - a). The second term, -b, is a forcing function. The problem is how to prevent a from becoming negative while preserving the effect of -b on a as much as possible. That is, how to allow -b to affect a as long as a remains > 0, but stifle the effect of -b when a risks going negative.

Using a logistic function with b as an argument wasn't going to work. Why? Plots of a and b showed that (1) a became negative at about 1.4 ms, while b was still quite positive, and that (2) a became positive again at about 3 ms, when b was small but in the early part of its negative phase. The logistic function only has one center--put that center at or near the b value where a would have gone negative, and you mangle the time course of a after that point. b won't affect a again until b rises above that positive value, and that won't happen until about 7 ms later. And it's not useful to center the logistic function at more negative values of b, because a will go negative (or oscillate, if the WATCH statement is being used to reset a to 0).

Instead, squash the driving function with a logistic function based on a. That is, change the ODE from

a' = a * (1 - a) - b(t)

to

a' = a * (1 - a) - b(t) * expit(k*(a-c))

The squashing function is

Code: Select all

FUNCTION expit(x) {
  expit = 1/(1 + exp(-x))
}
and c and k in the call expit(k*(a-c)) are range variable PARAMETERs that specify the center and steepness of the squashing function, respectively.

Notice that I also changed STATE b to

Code: Select all

FUNCTION b(x) {
  b = b0 * cos(w * x)
}
because nrnivmodl kept complaining about assigning a value to a variable that is declared in the STATE block.

These changes let me get rid of the NET_RECEIVE block and the net_send statement in the INITIAL block.
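Assembling those pieces, the whole mechanism might look roughly like this (a sketch only; the POINT_PROCESS name and the RANGE statement are mine, and the a_min/t_min bookkeeping described below is omitted):

Code: Select all

NEURON {
	POINT_PROCESS squashed  : hypothetical name for this sketch
	RANGE b0, w, k, c
}

PARAMETER {
	b0 = 1.0   : scaling of the forcing function
	w = 0.628  : frequency of the forcing function
	k = 200    : steepness of the squashing function
	c = 0.03   : center of the squashing function
}

STATE { a }

BREAKPOINT {
	SOLVE states METHOD derivimplicit
}

INITIAL {
	a = 1.0
}

DERIVATIVE states {
	: the forcing term is attenuated as a approaches c from above
	a' = a * (1 - a) - b(t) * expit(k*(a - c))
}

FUNCTION b(x) {
	b = b0 * cos(w * x)
}

FUNCTION expit(x) {
	expit = 1/(1 + exp(-x))
}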

A few tests showed that k = 200, c = 0.03 were adequate to prevent a from becoming negative. I verified this graphically and by inserting the following extra code into the mod file:

RANGE a_min, t_min

where a_min and t_min are declared as ASSIGNED variables,

and this new block

Code: Select all

BEFORE STEP {
  if (a < a_min) {
    a_min = a
    t_min = t
  }
}
An aside: BEFORE STEP is the safest place to do extrema checks or capture variables. Doing such checks in the BREAKPOINT block risks being confounded, since the adaptive integrators may try several different time steps before settling on one.

Then I added this bit of hoc to the main program

Code: Select all

// replace standard run system's proc run()
proc run() {
  running_ = 1
  stdinit()
  continuerun(tstop)
  printf("k %g  c %g  a_min %g  t_min %g  run time %g\n", \
    obj.k, obj.c, obj.a_min, obj.t_min, realtime)
}
and every simulation run reported what happened.

Re: Does POINTER make NET_RECEIVE slower?

Post by fietkiewicz »

Thanks, Ted! Yes, your version is very faithful to my original. Thanks for taking the time to tweak the logistic parameters and make them work with the original parameters for 'b'. Much nicer than my rough demo.