MRF between two curves

Using the Multiple Run Fitter, praxis, etc..

MRF between two curves

Post by Sandrine »

Hello Ted,

The NEURON course was a while ago... and even though I did the two tutorials again, I cannot figure out how to do what I want with the MRF.

I have a single neuron (with several compartments). I attached an IClamp (del = 200 ms, dur = 400 ms, and amp from 0.1 to 1 nA) and plotted the F-I curve (I saved the I and F values in a .dat file). F is the frequency, i.e. the spike rate, of my neuron.

I also have experimental data (the F-I relation for a neuron in response to current pulses from 0.1 to 1 nA), which I put in a .dat file as well:

label:F-Irelation
10
0.1 0
0.2 2.5
0.3 85
.....


What I would like to do is optimize the parameters of my neuron (g_pas, E_pas, Ra, gmax_na, gmax_k) so that the two curves match.

First question: is it possible to use the MRF for this task?
Second question: I'm stuck at the point where I have to enter the variable to fit. I want to enter an entire curve (from the first file), but I can't. Can I?

Thank you in advance,

Sandrine.

Optimizing a statistical descriptor with the MRF

Post by ted »

I think it's possible to use the Multiple Run Fitter for the task you describe, but the
problem must first be recast in a suitable way.

The principal use of the MRF is to adjust model parameters so that some "first class
variable"* (a variable that appears in the equations that govern model behavior) follows
a desired trajectory. But this is not what you want to do. Unlike membrane potential,
ionic currents, conductances, concentrations, etc., firing frequency is not a "first class
variable"--it's merely a statistical descriptor of the abundance of spikes over a particular
time interval.

*--An aside:
There may be a mot juste for the concept that I'm trying to express, but I don't know
it. I just hope this is clear enough for you to get the idea.
Now back to the main line of argument.


So I would approach the problem from a very different perspective: as the optimization
of a function called freq(), where freq() has the following operational definition:
func freq() takes an argument i, which specifies the amplitude of the current stimulus,
and returns a value equal to the spike frequency elicited by i.

Now you can use the MRF as a function optimizer, to adjust the model's parameters
so as to match the shape of the freq(i) vs. i plot to whatever you like.

"So what does the code for func freq() look like?"

Maybe like this:

Code: Select all

func freq() {
  stim.amp = $1  // stimulus amplitude in nA (amp, not i, is the IClamp parameter to set)
  run()
  return calc_freq()
}
where I assume that, after model setup, you have attached a NetCon to your cell and
use its record() method to capture spike times to a Vector called tvec.
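
A minimal sketch of that setup might look like the following (untested); the section name soma, the NetCon threshold, and the stimulus parameters are assumptions about your model, not something taken from your code.

Code: Select all

objref stim, nc, nil, tvec
soma stim = new IClamp(0.5)         // the current source that freq() adjusts
stim.del = 200                      // ms
stim.dur = 400                      // ms
tvec = new Vector()
soma nc = new NetCon(&v(0.5), nil)  // no synaptic target needed, just spike detection
nc.threshold = 0                    // mV; pick a level that your spikes cross
nc.record(tvec)                     // appends each spike time to tvec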

All of the art goes into designing calc_freq(). The simplest calc_freq() would look like this:

Code: Select all

func calc_freq() { local nn
  nn = tvec.size() - 1 // nn is number of interspike intervals
  if (nn<1) return -1 // if you don't have at least 2 spikes,
    // you can't calculate frequency, so return a bogus value
  return nn/(tvec.x[nn] - tvec.x[0]) // freq = # of ISIs / time interval
}
This simple implementation of calc_freq() is satisfactory if your cell fires at a constant
rate.

If firing shows adaptation or acceleration, you might want to wait for the ISI to converge
toward a constant value before determining the firing frequency. If you wanted to be
very clever, then instead of running the simulation for a fixed time, you could stop it
when firing frequency stabilizes. To do this, you'd have a second NetCon that uses its
record() method to call a procedure that checks each ISI and then stops the simulation
after you get, say, five ISIs in a row whose instantaneous frequency (1/ISI) satisfies a
variance criterion, e.g. less than 1% change from the first to the fifth.
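
If you want to experiment with that idea, a rough sketch (untested) might look like the code below; the procedure name check_isi(), the NetCon source, and the "five consecutive ISIs within 1%" rule are illustrative assumptions, not the only way to do it.

Code: Select all

objref nc2
soma nc2 = new NetCon(&v(0.5), nil)  // second NetCon on the same spike source
nc2.record("check_isi()")            // call check_isi() at each spike time

proc check_isi() { local n, f1, f5
  // tvec may or may not already contain the current spike when this runs
  // (delivery order of the two NetCons is not guaranteed); at worst the
  // test lags by one spike.
  n = tvec.size()
  if (n < 6) return                  // need at least 5 ISIs, i.e. 6 spikes
  f1 = 1/(tvec.x[n-5] - tvec.x[n-6]) // instantaneous freq 5 ISIs ago
  f5 = 1/(tvec.x[n-1] - tvec.x[n-2]) // most recent instantaneous freq
  if (abs(f5 - f1)/f1 < 0.01) stoprun = 1  // <1% change: end the run early
}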

Post by ted »

As written, calc_freq() returns frequency in cycles per millisecond (i.e. kilohertz). To get
frequency in Hz, the return statement should be

Code: Select all

  return 1e3*nn/(tvec.x[nn] - tvec.x[0])

Post by Sandrine »

Thank you Ted, it works exactly as I wanted.

The only thing I haven't found out how to control is the "dt" of my parameters.
In the "domain panel" I can control the low and high values of my parameters, but how can I specify the step?

For example, for my parameter Ra, I would like to choose a coarse step (e.g. 180 < 200 < 220), or an even larger one, in order to test a very large domain (50 < Ra < 300) quickly. Then, once I have narrowed the interval (if the error between the two curves decreases), I could choose a smaller step to find a more precise value.

Sandrine.

Post by Sandrine »

I think I found it... the "randomize with factor" button seems to be part of the answer!?

Post by ted »

The only thing I haven't found out how to control is the "dt" of my parameters.
Given y = f(x, p0, p1 . . .) where x is the independent variable and pi are parameters,
on each pass the MRF evaluates f once for each of the (x, y) points in the data set
to which you are tuning f. So if your data are firing rates recorded for stimulus currents
0.01 - 0.11 nA in increments of 0.02 nA, you'll have 6 data points, and the MRF will run 6
simulations (1 for each value of stimulus current) every time it evaluates the effect of new
parameters on the model's F-I curve.

Parameters are treated as continuous variables over a user-specifiable range. The time
required to optimize a function has nothing to do with this. Praxis does not evaluate f
over a grid of points in parameter space--it "hops" from point to point along a trajectory,
each step of which is governed by Brent's principal axis method (about which there are
dozens of papers--see
http://reference.wolfram.com/mathematic ... ethod.html
for a link to a big list).

Optimization time is governed by how long it takes to do a single evaluation of f,
how many (x, y) points are in the data to which you are fitting f,
and how many "hops" in parameter space are necessary for the error (difference
between f and the empirical data) to fall to an acceptable level.

"randomize with factor" has nothing to do with this; it perturbs the optimized parameters
from their current values. Useful if you think an optimization may have gotten stuck in a
local minimum.