undersampling curves

NMODL and the Channel Builder.
thitsa
Posts: 10
Joined: Wed Aug 14, 2019 5:09 am
Location: France

undersampling curves

Post by thitsa »

I have experimental traces of Ca fluorescent imaging that are sampled at 0.5 ms.
I want to fit these recordings with my model but to do that I need to undersample the simulated cai.
For example if my simulation dt is 0.1 ms and the duration is 8 ms, in MATLAB I would write

y=cai(1:80)
n=1
i=1
while n<77
und(i)=sum(y(n:n+4))
n=n+5
i=i+1
end

How can I get an undersampled cai in NEURON, so that I can then use the optimization tool to match my experiments?
ted
Site Admin
Posts: 6287
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

Re: undersampling curves

Post by ted »

A good question about a common practical issue. In order to provide the most appropriate answer, I'll need a bit more information.
I have experimental traces of Ca fluorescent imaging that are sampled at 0.5 ms.
I want to fit these recordings with my model but to do that I need to undersample the simulated cai.
For example if my simulation dt is 0.1 ms
So the experimental data captures an estimate of calcium concentration at 0.5 ms intervals. But the MATLAB code in your post contains this statement

und(i)=sum(y(n:n+4))

which looks like it's adding up simulated cai over 0.5 ms intervals (roughly equivalent to integrating simulated cai over 0.5 ms). Or did you intend to calculate the average simulated cai, but forgot to divide by 5?

But why would it be necessary or desirable to average simulated cai over 0.5 ms intervals? Why not just pick the individual simulated cai values that correspond to integer multiples of 0.5 ms?

Finally, is your model implemented in Python or in hoc, and what is the name of the variable (presumably a vector) that holds the simulated time course of calcium concentration?
thitsa
Posts: 10
Joined: Wed Aug 14, 2019 5:09 am
Location: France

Re: undersampling curves

Post by thitsa »

Yes it was indeed supposed to be
und(i)=sum(y(n:n+4))/5
But why would it be necessary or desirable to average simulated cai over 0.5 ms intervals? Why not just pick the individual simulated cai values that correspond to integer multiples of 0.5 ms?
It is not the same: during the experiment I don't measure the concentration every 0.5 ms but rather during 0.5 ms, so what I get in my recordings is an average concentration over that interval. If I simply take cai at multiples of 0.5 ms I end up with almost the same curve (just somewhat sharp-edged). With averaging, however, the undersampled curve looks different from the original: the kinetics seem slower, although that is only an artifact of the undersampling.

If I use the optimization tool to change the kinetic parameters of the Ca channel in the mod file so that the simulated cai fits my experimental recording, I will get a false result. I have to fit my experimental recording against an undersampled cai.
Finally, is your model implemented in Python or in hoc, and what is the name of the variable (presumably a vector) that holds the simulated time course of calcium concentration?
I use hoc. So far I haven't named it; I simply plot dend.cai(0.5). I have a single compartment model.
I am also not sure if I should be doing this calculation in my main hoc or in a mod file, considering that this is the variable to be fitted in the optimization.
ted
Site Admin
Posts: 6287
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

Re: undersampling curves

Post by ted »

Thanks for the explanation. Please excuse me if the following discussion seems rather basic, but we try to be mindful of the fact that not everyone who reads these comments will have the same level of expertise.
I am also not sure if I should be doing this calculation in my main hoc or in a mod file, considering that this is the variable to be fitted in the optimization.
This is a simple task for NEURON's Vector class, which has many methods that are easy to use and computationally efficient (because they're performed by compiled code). No need to resort to NMODL, which would probably require a VERBATIM block to accomplish the same task; for many reasons it's almost never a good idea to resort to VERBATIM blocks.

Optimization typically involves a workflow similar to this:

Code: Select all

set up a model
repeat
  set parameters to desired values
  run a simulation
  calculate a penalty function (AKA measure of error, 
    usually by comparing simulation results
    against some desired standard)
  decide what parameters should be changed, and by how much
until the error is sufficiently small
What you need to do is
1. use the Vector class's record() method to capture the time course of the variable of interest. Do this by executing this code after model setup is complete, but before entering the repeat . . . until loop:

Code: Select all

objref caivec, tvec
tvec = new Vector() // always a good idea to capture t as well
tvec.record(&t)
caivec = new Vector()
dend caivec.record(&cai(0.5))
2. modify the repeat . . . until loop so that undersampling is performed after every run but before the penalty function is calculated. The Vector class's rebin() method will do this nicely. Insert this code right after the simulation is executed:

Code: Select all

objref cax
cax = caivec.c
cax.rebin(5) // elements are sums of caivec's elements 0-4, 5-9, etc.
cax.div(5)
If you want the first element of cax to be the value of cai at t==0, and the subsequent elements to be the averages of cai's elements 1-5, 6-10 etc., insert this instead:

Code: Select all

objref cax
cax = caivec.c(1) // omit the value of cai at t == 0
cax.rebin(5) // elements are sums of caivec's elements 1-5, 6-10 etc.
cax.div(5)
cax.append(caivec.x[0])
cax.rotate(1) // first element is cai at t == 0
Be sure to read the Programmer's Reference documentation of these Vector class methods.
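If you want to see concretely what rebin() and div() do before applying them to caivec, a quick toy test along these lines (not part of the original exchange; the Vector name is arbitrary) can be typed at the oc> prompt:

Code: Select all

objref demo
demo = new Vector()
demo.indgen(0, 9, 1)   // 0, 1, 2, ... 9
demo.rebin(5)          // becomes 10, 35 (sums over successive groups of 5)
demo.div(5)            // becomes 2, 7 (the averages of those groups)
demo.printf()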
thitsa
Posts: 10
Joined: Wed Aug 14, 2019 5:09 am
Location: France

Re: undersampling curves

Post by thitsa »

Thank you! That's exactly what I need, and it seems very easy.
thitsa
Posts: 10
Joined: Wed Aug 14, 2019 5:09 am
Location: France

Re: undersampling curves

Post by thitsa »

Code: Select all

load_file("nrngui.hoc")

//set the compartment
create dend
dend{
  nseg = 1
  L = 5
  diam = 2

  insert ca12 
}

access dend

//constants
celsius = 37
dt = 0.1
steps_per_ms = 1/dt
tstop = 8
v_init = -80

//import experimental input
objref pulse1
objref InjectMatrix

objref InjectFile,  voltage
InjectFile = new File("v.dat")
InjectFile.ropen()

InjectMatrix = new Matrix()
InjectMatrix.scanf(InjectFile,80,2)
voltage = new Vector(80)
for(i=0; i<voltage.size; i = i+1){
	if(i>=InjectMatrix.nrow){
		voltage.x[i] = voltage.x[i-1]
	}else{
		voltage.x[i]=InjectMatrix.x[i][1]
	}
}

 ////stimulate with experimental AP
proc init_VClamp() {
        access dend
        pulse1 = new SEClamp(0.5)
        pulse1.dur1 = tstop
        voltage.play(&pulse1.amp1,dt)
}

init_VClamp()

//record the cai
objref caivec, tvec
tvec = new Vector() // always a good idea to capture t as well
tvec.record(&t)
caivec = new Vector()
dend caivec.record(&cai(0.5))

//compare with experimental output and optimize
xopen("fit.ses")

run()

//undersample cai
objref cax
cax = caivec.c(1) // omit the value of cai at t == 0
cax.rebin(5) // elements are sums of caivec's elements 1-5, 6-10 etc.
cax.div(5)
cax.append(caivec.x[0])
cax.rotate(1) // first element is cai at t == 0

What I am doing is wrong, but I can't understand how. The optimizer only recognizes variables of the form dend.variable() or single elements like cax.x[80]; the whole vector cax cannot be used as the variable to fit.
thitsa
Posts: 10
Joined: Wed Aug 14, 2019 5:09 am
Location: France

Re: undersampling curves

Post by thitsa »

Code: Select all

TITLE undersampling of cai

COMMENT
	undersampling of the simulated cai to match the experimental speed 
ENDCOMMENT


NEURON {
    SUFFIX undercai
    USEION ca READ cai
    RANGE cau, b, caund, tr
}

PARAMETER {
    tSta = 0.5    : sampling interval of the experimental recordings (ms)
}

ASSIGNED {
    cai (millimolar)
    cau (millimolar)      : most recent 0.5 ms average of cai
    caund (millimolar)    : running sum of cai within the current window
    b                     : index of the next sampling time
    tr                    : copy of t (ms)
    plp                   : time at which the next average is due (ms)
}

INITIAL {
    b = 1
    plp = 0.5
    cau = cai
    caund = 0
}

BREAKPOINT {
VERBATIM
    /* accumulate cai at every time step; whenever t reaches b*tSta
       (every 5 steps with dt = 0.1 ms) store the window average in cau,
       then reset the sum and advance to the next sampling time */
    plp = b*tSta;
    caund = caund + cai;
    tr = t;

    if ( (tr < plp + 0.05) && (tr > plp - 0.05) ) {
        cau = caund/5;
        caund = 0;
        b = b + 1;
    }
ENDVERBATIM
}

In the end I resorted to a VERBATIM block, in case anyone is dealing with the same problem. I get the same result as with the Vector class approach (the cax variable described above), so it seems to be working. I am not experienced with NEURON, so let me know if you spot anything wrong.
ted
Site Admin
Posts: 6287
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

Re: undersampling curves

Post by ted »

I begin to see your problem. I thought you were working with some generic optimization library that you could just feed some postprocessed data after a simulation was complete. Instead, you're probably using

Code: Select all

xopen("fit.ses")
to recreate a Multiple Run Fitter, right?
thitsa
Posts: 10
Joined: Wed Aug 14, 2019 5:09 am
Location: France

Re: undersampling curves

Post by thitsa »

Yes, I am using a Multiple Run Fitter; sorry I didn't specify that. With the VERBATIM block it seems to be working fine.
ted
Site Admin
Posts: 6287
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

Re: undersampling curves

Post by ted »

This note is primarily for others who may read this thread.

Using NMODL for "data processing on the fly" raises several potential issues.

1. In this particular application, the sequence of execution of NMODL-specified mechanisms is important. Clearly all mechanisms that WRITE ica must be executed before undercai is executed (otherwise some currents may have values from the current time step while others will have values from the previous time step). The default execution sequence follows alphabetical order by mechanism names. Since this particular mechanism's name is "undercai", the programmer must make sure that all mechanisms that WRITE ica have names that appear before "undercai" in an alphabetized list.

2. undercai will produce correct results only if (1) fixed time step integration is used, and (2) dt is 0.1 ms. It is guaranteed to produce incorrect results if dt has some other value or if adaptive integration is used. This will be a problem for those models that require small dt or adaptive integration to achieve stability or sufficient accuracy. Minor revisions to the NMODL code (e.g. replacing embedded magic numbers with PARAMETERs) would allow users to use undercai with other values of dt and calculate averages over different intervals without having to re-edit and recompile the NMODL code, but reconciling dt with the number of samples to be averaged would be a task left to the end-user.

3. Instead of using a conditional statement to check the current value of t, it would be simpler (and automatically avoid roundoff problems) to just keep a count of how many cai values have been added; then when the count reaches the desired value, calculate the average and reset the count to 0.
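For readers who want a concrete starting point, here is a rough sketch (my illustration, not code from this thread) of how suggestions 2 and 3 might be combined in plain NMODL, with no VERBATIM block. The mechanism and variable names are invented, the number of samples per window is a PARAMETER, and a counter replaces the test on t; it still assumes fixed time step integration with nsamp*dt equal to the experimental sampling interval.

Code: Select all

: sketch only -- averages cai over windows of nsamp fixed time steps
NEURON {
    SUFFIX avgcai
    USEION ca READ cai
    RANGE cau, nsamp
}

PARAMETER {
    nsamp = 5    : samples per window, e.g. 0.5 ms / dt with dt = 0.1 ms
}

ASSIGNED {
    cai (millimolar)
    cau (millimolar)    : most recent window average of cai
    casum (millimolar)  : running sum within the current window
    count               : number of samples accumulated so far
}

INITIAL {
    cau = cai
    casum = 0
    count = 0
}

BREAKPOINT {
    casum = casum + cai
    count = count + 1
    if (count >= nsamp) {
        cau = casum/nsamp
        casum = 0
        count = 0
    }
}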


An alternative to "data processing on the fly" would be to recast the problem as a function fitting task that compares the values of a function f() with the experimentally observed values and uses that error measure to adjust the model parameters. This would make the problem tractable for a Function Fitness generator controlled by the MRF. Properly done, it would be able to handle simulations that use any fixed time step, or even adaptive integration. And it would not require any NMODL code with VERBATIM blocks.

All the magic would be in the function f(), which in pseudocode would look like this:

Code: Select all

// expects one argument: a time value that is equal to 
//   one of the times at which the experimental data were sampled
// assumes the existence of two Vectors: tvec and caivec
//   which are used to hold the desired sample times
//   and the model-generated corresponding values of cai
// on first entry, caivec may contain all zeroes
func f() { localobj tmpcaivec, tmptvec
  if the model's parameters have been changed {
    run a simulation that captures the time course of cai and t to tmpcaivec and tmptvec 
    use tmpcaivec and tmptvec to fill caivec with values 
      that are averaged and sampled at the times specified by tvec
  }
  return the value in caivec that corresponds to argument $1
}
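To make that a bit more concrete, here is a very rough hoc sketch of f() (my own illustration, untested, not code from the thread). It assumes the caivec and tvec recordings set up earlier, dt = 0.1 ms, tstop = 8 ms, and a Vector samptimes holding the experimental sample times; instead of testing whether the parameters have changed, it simply reruns the simulation whenever the MRF asks for the first sample time.

Code: Select all

objref samptimes, avgvec
samptimes = new Vector()
samptimes.indgen(0.5, 8, 0.5)   // assumed experimental sample times (ms)
avgvec = new Vector()

proc update_avg() { localobj tmp
  run()                // refills tvec and caivec via the record() calls
  tmp = caivec.c(1)    // drop the value at t == 0
  tmp.rebin(5)         // sums over successive 0.5 ms windows (dt = 0.1 ms)
  tmp.div(5)           // convert the sums to window averages
  avgvec = tmp
}

func f() { local i
  // crude stand-in for a "parameters changed" test:
  // rerun whenever the first sample time is requested
  if ($1 == samptimes.x[0]) { update_avg() }
  i = samptimes.indwhere("==", $1)
  if (i < 0) { i = 0 }  // shouldn't happen if $1 is one of the sample times
  return avgvec.x[i]
}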
One could even do the cai averaging with an NMODL-based mechanism that (1) calculates a proper integral of cai between sampling times and (2) works even with adaptive integration.
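For completeness, a bare-bones sketch of that idea (again my own illustration, with invented names) might look like the following. The STATE variable accumulates the time integral of cai, so it is handled consistently by fixed-step and adaptive integration; in hoc one would record caint_caintgl at the experimental sample times and divide successive differences by the 0.5 ms sampling interval to recover the window averages.

Code: Select all

: sketch only -- integrates cai over time; works with fixed or adaptive steps
NEURON {
    SUFFIX caintgl
    USEION ca READ cai
}

UNITS {
    (mM) = (millimolar)
    (ms) = (millisecond)
}

ASSIGNED {
    cai (mM)
}

STATE {
    caint (mM ms)    : running time integral of cai
}

INITIAL {
    caint = 0
}

BREAKPOINT {
    SOLVE integ METHOD cnexp
}

DERIVATIVE integ {
    caint' = cai
}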

While this would be quite elegant and robust, it would require extra programming. That might be justified depending on the project; for the present example, the "processing on the fly" approach does the job well and is safe in the hands of its developer.