Implementing delays with continuous variables


Post by JBall »

I have an interesting problem here that I've tackled conceptually several times with no luck, and I'm hoping someone here can help.

I need to implement a delay for a continuous value during simulations. I won't get into the details of the project, but an equivalent exercise would be to imagine a gap junction with a delay. NetCon doesn't help because I need to execute this at every time step. I've thought about using circular buffers to make it happen, but I have a couple of problems with this: 1) I'm using the variable time step method, so a fixed buffer size would be problematic, and 2) I'd like to make as much of this happen in compiled code (NMODL) rather than in hoc, to minimize overhead, because I intend to use this delay element many times in the model.

One idea was inspired by delay lines implemented in firing-rate models, where the input is passed through a first-order low-pass filter. That gives me the required delay, but it also attenuates quickly changing input, which is a problem.

So, I may be thinking entirely too hard about this, but I'm very interested to hear whether anybody has solved a similar problem, or whether NEURON has features that could partially handle this for me.

Re: Implementing delays with continuous variables

Post by ted »

It should be straightforward to implement a delay with a ring buffer that works with either fixed time step or adaptive integration. The trick is to use the event delivery system to compel fixed-interval input sampling and output generation. This would work quite nicely with fixed time step integration, and it would also "work" with adaptive integration--subject to the caveat that adaptive integration would treat the output as a staircase function rather than a continuous function of time.
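
In outline, the sampling part (before any buffering is added) could look something like this. This is just a rough, untested sketch; the mechanism and variable names are invented for illustration.

: samplehold_sketch.mod -- untested illustration, not a working delay yet
: A self-event launched in INITIAL makes the mechanism look at the POINTER
: variable "in" every invl ms and copy it to "out", independently of the
: integrator's time steps.

NEURON {
    POINT_PROCESS SampleHoldSketch
    POINTER in
    RANGE out, invl
}

UNITS {
    (mV) = (millivolt)
}

PARAMETER {
    invl = 0.1 (ms)   : sampling interval
}

ASSIGNED {
    in (mV)
    out (mV)
}

INITIAL {
    out = in            : requires that the pointer has been set with setpointer
    net_send(invl, 1)   : launch the train of self-events
}

NET_RECEIVE (w) {
    if (flag == 1) {          : self-event
        out = in              : a real delay line would push "in" into a ring here
        net_send(invl, 1)     : schedule the next sample
    }
}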

Re: Implementing delays with continuous variables

Post by ted »

Accuracy would only be first order in time because the delayed variable would be treated as constant between sampling instants. The delay would be implemented in a POINT_PROCESS (because of the use of self-event to drive sampling at regular intervals). Size of the ring buffer would have to be declared prior to compilation, unless VERBATIM statements can be used to access elements of a hoc Vector. Let me know if you need some NMODL code to illustrate how this might be done.
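
To give the flavor, a ring-buffer version might look roughly like this. Again, this is an untested sketch with invented names, not working code; the buffer size BUFSIZE is fixed at compile time, and the pointer has to be set from hoc before initialization.

: ringdelay_sketch.mod -- untested illustration
: "out" reproduces "in" delayed by num*invl, as a staircase that is updated
: every invl ms by a self-event.

DEFINE BUFSIZE 1000

NEURON {
    POINT_PROCESS RingDelaySketch
    POINTER in
    RANGE out, invl, num
}

UNITS {
    (mV) = (millivolt)
}

PARAMETER {
    invl = 0.1 (ms)   : sampling interval
    num = 10          : delay = num*invl, and num must be smaller than BUFSIZE
}

ASSIGNED {
    in (mV)
    out (mV)
    x[BUFSIZE] (mV)   : the ring
    jin               : index at which the next sample is stored
    jout              : index from which the next output value is read
}

INITIAL {
    FROM i = 0 TO BUFSIZE-1 {
        x[i] = in     : fill the ring so "out" does not start with an artifact
    }
    jin = num
    jout = 0
    out = x[jout]
    net_send(invl, 1)
}

NET_RECEIVE (w) {
    if (flag == 1) {
        x[jin] = in       : store the newest sample
        out = x[jout]     : emit the sample taken num intervals ago
        jin = jin + 1
        if (jin >= BUFSIZE) { jin = 0 }
        jout = jout + 1
        if (jout >= BUFSIZE) { jout = 0 }
        net_send(invl, 1)
    }
}

From hoc, the only wiring needed would be a setpointer statement connecting "in" to the variable being delayed; "out" can then be read like any other point process RANGE variable.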

Re: Implementing delays with continuous variables

Post by JBall »

I've thought about this based on your advice, but I haven't figured out how to make it happen. One idea was to use an NMODL TABLE with a dependency on time (modulo the delay) as a sort of ring buffer, but that would require writing individual values of my delayed variable at every time step in the mod file, which is something I don't know how to do.

I apologize, but yes, some explicit code might help a lot. I feel like I understand the basics of hoc and NMODL pretty well, but this escapes me completely right now.

Re: Implementing delays with continuous variables

Post by ted »

Get http://www.neuron.yale.edu/ftp/ted/neuron/delay.zip, unzip it into an empty directory, compile the mod file, and run inittest.hoc

Use the PointProcessManager to specify different values for num and invl (number of samples between input and output, and interval between samples--total delay = num*invl) and observe the result. Before using in your own work, be sure to read delay.mod and pay particular attention to the caveats. Initialization is an important issue, often overlooked, and delay mechanisms can pose special problems for initialization.

Re: Implementing delays with continuous variables

Post by JBall »

Ted,

Thanks so much for your help. After some fiddling I think I understand how this works. It really highlights how new I am to the software--for instance, I had no idea that arrays were allowed in NMODL code. Anyway, I sat down and created a modified version of the mod file that implements a first-order hold rather than a zero-order hold (linked below). This helps alleviate the stair-step issue, but of course it still gives inaccuracies for long sampling intervals. I thought about putting together a proper discrete-time low-pass reconstruction filter, but I believe that any problem it could overcome could just as easily be fixed by reducing the sampling interval in this file.

http://web.missouri.edu/~jmbdb7/Neuron/delay2.zip
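
In outline, the change amounts to latching the two buffered samples on either side of the current output and ramping between them, something like this (a rough sketch of the idea only, reusing the invented names from the earlier sketch in this thread, not the actual contents of the zip; y0, y1, and tlast would also need sensible values in INITIAL):

: changes relative to the ring-buffer sketch earlier in the thread (untested)

ASSIGNED {
    y0 (mV)      : delayed sample at the start of the current sampling interval
    y1 (mV)      : delayed sample at the end of the current sampling interval
    tlast (ms)   : time of the most recent sampling event
}

NET_RECEIVE (w) {
    if (flag == 1) {
        x[jin] = in                 : store the newest sample, as before
        y0 = x[jout]                : sample delayed by num*invl
        jout = jout + 1
        if (jout >= BUFSIZE) { jout = 0 }
        y1 = x[jout]                : sample delayed by (num-1)*invl
        tlast = t
        jin = jin + 1
        if (jin >= BUFSIZE) { jin = 0 }
        net_send(invl, 1)
    }
}

BEFORE BREAKPOINT {
    : ramp the output linearly across the interval instead of holding it constant
    out = y0 + (y1 - y0)*(t - tlast)/invl
}

The interpolation uses only samples already in the buffer, so it needs num >= 1 (a delay of at least one sampling interval), and the output approximates in(t - num*invl) with a ramp rather than a staircase.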

Re: Implementing delays with continuous variables

Post by ted »

Couldn't retrieve your zip file; will try later.

My implementation used POINTERs for both the input and output variables. This is very general but exposes the input variable to corruption if the ring's contents are not assigned an appropriate value. For example, an earlier draft of my code, before the version I put on the WWW, initialized the ring's contents to 0, so when I used membrane potential of a single compartment model with HH currents as the input variable, the
in = x[jin]
statement in the INITIAL block of the delay mechanism forced membrane potential to jump from the initialized value of -65 mV to 0 mV at the start of the simulation--provoking an unwanted spike. To circumvent this, I added the x0 parameter which, in the usage example, is used to set the ring's contents to -65, i.e. the same as the initialized membrane potential.
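
Schematically, the fix amounts to something like this (a sketch reusing the invented names from earlier in the thread, not the literal contents of delay.mod):

PARAMETER {
    x0 = -65 (mV)    : value used to fill the ring at initialization
}

INITIAL {
    FROM i = 0 TO BUFSIZE-1 {
        x[i] = x0    : fill the ring with the expected initial value
    }
    jin = num
    jout = 0
    in = x[jin]      : writes back through the POINTER, which is why x0 must
                     : match the initialized value of the pointed-to variable
    out = x[jout]
    net_send(invl, 1)
}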

An alternative implementation of the delay mechanism that would not be susceptible to this particular problem would be "hard coded" to use a known RANGE variable as the input variable--e.g. membrane potential, or an ionic current, or the concentration of an ionic species or some "second messenger." This would not have a variable called "in", and the INITIAL block would not have a
in = x[jin]
statement. However, the ring's contents would still have to be initialized in order to prevent undesired perturbation of the "out" variable.

In the biological mechanisms with which I am familiar, there isn't anything quite like an analog delay line; delays are usually a consequence of a cascade of reactions, regenerative or dissipative signal propagation, or accumulation/depletion mechanisms, and can be represented by a few ODEs (or reactions) or a combination of those and events.
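
For instance, a lag of roughly n*tau (with smoothing) falls out of a chain of n first-order stages, along these lines (an untested sketch with invented names):

: cascade_sketch.mod -- untested illustration
: A chain of three first-order stages; the output lags the input by roughly
: 3*tau, but the waveform is also smoothed, unlike a true delay line.

NEURON {
    POINT_PROCESS CascadeDelaySketch
    POINTER in
    RANGE tau, out
}

UNITS {
    (mV) = (millivolt)
}

PARAMETER {
    tau = 2 (ms)
}

ASSIGNED {
    in (mV)
    out (mV)
}

STATE {
    s1 (mV)
    s2 (mV)
    s3 (mV)
}

INITIAL {
    s1 = in
    s2 = in
    s3 = in
    out = s3
}

BREAKPOINT {
    SOLVE stages METHOD cnexp
    out = s3
}

DERIVATIVE stages {
    s1' = (in - s1)/tau
    s2' = (s1 - s2)/tau
    s3' = (s2 - s3)/tau
}

This kind of mechanism has no trouble with adaptive integration, but it smooths fast transients, which is exactly the attenuation problem raised at the top of this thread.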

Re: Implementing delays with continuous variables

Post by JBall »

I've implemented this in my simulation routine, and it works well.

However, in the process I've had to completely scrap using cvode, because the net_send interval and the time-step optimization routine fight each other and really slow the simulation down in order to land on the right point in time. I made an attempt to call the next iteration with net_send(dt,1), but that would require the routine to figure out where in the buffer the desired data point is located. I tried doing this by rotating through the time buffer I created in the mod file, but it was a bit much for me. I think I'm going to stay with the fixed time step unless you can suggest something that would let me use the variable time step instead.

Have you been able to access the file linked above yet? If not and you are interested, I can try another upload site. Again, thanks for all of your help.

Re: Implementing delays with continuous variables

Post by ted »

JBall wrote: I've had to completely scrap using cvode, because the net_send interval and the time-step optimization routine fight each other and really slow the simulation down
I thought that might happen with short sampling intervals. The problem is that every time the "out" variable changes, the result is an abrupt change of the variable to which "out" is a POINTER. This causes the adaptive integrator to reinitialize, which is similar to starting a new initial value problem and forces the integration time step to a very small value.
I made an attempt to call the next iteration with net_send(dt,1)
A mod file that uses dt is not compatible with NEURON's adaptive integrators; attempts to compile would produce a warning to that effect. I'm not sure the code would even run; would expect at least an error message.

So you're stuck with fixed dt.
Have you been able to access the file linked above yet?
Finally got it today.

The new version is extrapolating future values of the output variable from its two most recently calculated values. This is equivalent to the forward Euler method, which becomes unstable if the time step is too large. As for the abrupt jump of the output variable that occurs at each sampling time, extrapolation may reduce or increase the jump (and is guaranteed to cause jumps on either side of maxima and minima), but it will eliminate the jump only when the first derivative of the input variable is constant.

So I wouldn't do it.

If "out" is not initialized, then its value at time t=0 will be accidental. During the first simulation run, the value of out will be 0, and on subsequent runs it will be whatever it was at the end of the previous run. This destroys the reproducibility of results that is essential for development and debugging--not good.

Those are the two most important things I have to say about the code. The following are NMODL programming tips in the context of this particular example.

MAXNUM is a constant. Arrays in NMODL have fixed size which must be specified at compilation. It does no good to declare MAXNUM to be RANGE, because each instance of the mechanism will have the same size ring buffer (whatever was hard coded into the source file at the time of compilation). No error messages are generated by modlunit or by compilation (too bad), but it's still not a good idea to declare MAXNUM to be RANGE because that fosters a conceptual error in the user (who may assign a new value to MAXNUM via a hoc statement and expect that this will have an effect--but it won't).
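
That is, leave it as a compile-time constant, e.g. (the 500 here is arbitrary):

DEFINE MAXNUM 500    : fixed when the mod file is compiled; changing it means recompiling

ASSIGNED {
    x[MAXNUM] (mV)   : every instance of the mechanism gets a buffer of exactly MAXNUM elements
}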

The declarations of t0 in the PARAMETER block and of tme in the ASSIGNED block need units--e.g.
t0 = 0 (ms)
and
tme[MAXNUM] (ms)
Omitting the units doesn't produce code that won't work, but it does interfere with modlunit's ability to detect other errors in the source code--it's always a good idea to check NMODL files with modlunit, but modlunit gags at the first thing that it finds.


One last question: is membrane potential or some ionic concentration or current the variable that you want to delay? I ask because if the answer is yes, a somewhat simpler implementation that deals with initialization differently would be more appropriate.

Re: Implementing delays with continuous variables

Post by JBall »

The new version is extrapolating future values of the output variable from its two most recently calculated values. This is equivalent to the forward Euler method, which becomes unstable if the time step is too large. As for the abrupt jump of the output variable that occurs at each sampling time, extrapolation may reduce or increase the jump (and is guaranteed to cause jumps on either side of maxima and minima), but it will eliminate the jump only when the first derivative of the input variable is constant.
If I understand you correctly, this isn't the way it works. It takes two previous values of the buffer and interpolates between them--it does not try to predict future values. So yes, while this is incompatible with delays smaller than the sampling interval, it is a stable implementation (theoretically, at least--there may be something in its implementation I have missed, since I'm far from an expert programmer).

I'm not sure whether I got warnings by using dt, but I certainly didn't get errors. I did get some odd results, though.

Finally, I think you've realized this by now, but my delayed variable is a fake "synaptic current" from my firing-rate model. I take a pointer to a firing-rate (specified as a voltage) from a cell and use it to calculate a sort of average synaptic current. This delay allows me to hold on to the firing-rate for X ms and then deploy it to calculate the current. So, I'm delaying a value gathered from a pointer to a cell voltage.

Re: Implementing delays with continuous variables

Post by ted »

JBall wrote:
The new version is extrapolating future values of the output variable from its two most recently calculated values. This is equivalent to the forward Euler method . . .
If I understand you correctly, this isn't the way it works. It takes two previous values of the buffer and interpolates between them--it does not try to predict future values. So yes, while this is incompatible with delays smaller than the sampling interval, it is a stable implementation
I see now. Large sampling interval causes no stability issues, and reduces storage requirements, but reduces accuracy of the delayed waveform and probably has no speed advantage over cutting the sampling interval to dt.
Finally, I think you've realized this by now, but my delayed variable is a fake "synaptic current" from my firing-rate model. I take a pointer to a firing-rate (specified as a voltage) from a cell and use it to calculate a sort of average synaptic current. This delay allows me to hold on to the firing-rate for X ms and then deploy it to calculate the current.
Novel implementation of a network that uses a continuous variable to represent firing rate.
So, I'm delaying a value gathered from a pointer to a cell voltage.
If the POINT_PROCESS were attached to the presynaptic section, it could monitor the voltage in that section without having to use the surrogate POINTER variable "in". Should be doable by eliminating
POINTER in
from the NEURON block, adding the statement
v (mV)
to the ASSIGNED block, and replacing all other occurrences of "in" with v. Eliminates the need for a setpointer statement in hoc, and prevents misinitialization of the buffer from corrupting the "in" variable (but doesn't eliminate the need for buffer initialization in order to avoid forcing a large artifact in the "out" variable).
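
Schematically, building on the ring-buffer sketch from earlier in the thread (again invented names, untested):

NEURON {
    POINT_PROCESS VDelaySketch
    RANGE out, invl, num, x0
    : no POINTER declaration, and no setpointer statement needed in hoc
}

ASSIGNED {
    v (mV)            : membrane potential of the segment the point process is attached to
    out (mV)
    x[BUFSIZE] (mV)
    jin
    jout
}

NET_RECEIVE (w) {
    if (flag == 1) {
        x[jin] = v    : sample the local membrane potential directly
        out = x[jout]
        : ring index bookkeeping and the next net_send as in the earlier sketch
    }
}

The INITIAL block would still fill the ring (with x0 or with v) so that "out" starts at the resting potential.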