passive fit not possible

Post by michael.stegen »

Hey all,

I'm trying to find the passive membrane properties of detailed reconstructed neurons. The spines are modelled explicitly.
We use normal cells and cells of an epilepsy model. The epilepsy cells have half the Rin and taum of the normal cells.
When I try my fitting routine on normal cells I can reproduce the already published values, but for the epileptic cells I am not able to fit the charging curves with feasible values. In the normal cells I find Ra ~ 300, cm ~ 1, and g_pas ~ 3e-5; in the others Ra ~ 600, cm ~ 2, and g_pas ~ 7e-5.
I checked the surfaces very often; these are correct.
I then tried to distribute the passive leak conductance with a sigmoidal shape along the neuron. But now my fitting routine doesn't work anymore. The program inserts negative conductances; then I get strange oscillating charging curves, and then the program stops. The reported error is really small, but the fitted curve doesn't match the experimental data at all.
Here are my questions:
- What about cm values larger than 1 (see Santos-Sacchi 2001)?
- Is it possible that I can't find a passive model because the currents are not passive? (I checked this: I can superimpose all hyper- and depolarized curves.)
- Is my routine working correctly?

It would be great to get some answers to my questions. Thanks in advance, Micha

Here is my code:

Code: Select all

func error() {local i,k, dendnr, spinenr, d, error localobj vec, SaveVoltage, datafile, voltages

    vec = new Vector()
    vec.record(&soma.v(.5))
    SaveVoltage = new File()
    SaveVoltage.wopen("254a_Voltage.dat")			//open file for writing
    
    dt = 1
    CC.del = 0
    CC.dur = 100
    CC.amp = 0.01
    tstop = 100
 
    forall {
    	insert pas
	Ra = 200 + $&2[1] * 10
	cm = 1 + $&2[2] * 0.01
	e_pas = 0
    }
    
access soma
    g_pas = $&2[4]
    distance()
    dendnr = 0
    for dendnr = 0, 41 {
      access apic[dendnr]
      k = 0
      for k = 1, nseg {
         apicloc = ((2*k-1)/(2*nseg))
         d=distance(apicloc)
         g_pas(apicloc) = $&2[4] + (($&2[5] - $&2[4])/(1+exp(($&2[3]-d)/($&2[0]))))
      }
    }
    spinenr = 0
    for spinenr = 0, 1218 {
       access spines[spinenr]
       d=distance(0.5)
       g_pas(0.5) = $&2[4] + (($&2[5] - $&2[4])/(1+exp(($&2[3]-d)/($&2[0]))))
       access neck[spinenr]
       g_pas(0.5) = $&2[4] + (($&2[5] - $&2[4])/(1+exp(($&2[3]-d)/($&2[0]))))
    }
    access soma
    init()
    run()
    vec.printf(SaveVoltage)					//writes values of the vector vec to file SaveVoltage
    SaveVoltage.close()
    
    voltages = new Vector()
    datafile = new File()                			//original dataset; voltages, dt = 20 kHz
    datafile.ropen("254a_long_pulse_average.txt")
    voltages.scanf(datafile)
    datafile.close()
    
    start_voltage_exp = voltages.x[2000]
    end_file_exp = voltages.size()
    for i = 0, end_file_exp-1 {
       voltages.x[i] = (voltages.x[i] - start_voltage_exp) // *(1000) // normalizing data; don't forget factors for mV and negative currents!
    }
    error = 0
    for i = 0, 100-1 {
       error += (voltages.x[20*i+2000]-vec.x[i])*(voltages.x[20*i+2000]-vec.x[i])    // factor of 20 matches the 20 kHz sample rate to dt = 1 ms
    }
    return error
}

n_p = 6
double params[n_p]

proc fit() { // do not declare n_p local here: it would shadow the global n_p and be 0
    params[0] = -1.25687
    params[1] = 18.8636
    params[2] = 50.5069
    params[3] = 81.5979
    params[4] = 1e-5
    params[5] = 10e-5
    attr_praxis(0.001,0.1,3)
    fit_praxis(n_p, "error", &params[0])
}
ted
Site Admin
Posts: 6300
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

Re: passive fit not possible

Post by ted »

I checked the surfaces very often
What surfaces?
my fitting routine doesn't work anymore. The program inserts negative conductances
The method of parameterization is at fault. Meaningful values of Ra, cm, and g_pas are positive definite, so writing
Ra = 200 + a*10
cm = 1 + b*0.01
and then letting an optimizer have free rein over a and b risks nonsense results. Better to abandon these expressions for Ra and cm, and instead assert that Ra, cm, and g_pas are all positive definite. It would also be good to tell the optimizer to use logarithmic scaling.
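For example (a sketch only; the reference values 200, 1, and 3e-5 and the names a, b, c are illustrative assumptions, not part of the original code), you can let praxis vary unconstrained variables and map them through exp(), which keeps the biophysical parameters positive and gives you logarithmic scaling at the same time:

Code: Select all

// sketch: a, b, c are the optimizer's unconstrained variables;
// the exp() mapping keeps Ra, cm, and g_pas strictly positive,
// and equal steps in a, b, c produce equal fractional changes
// in Ra, cm, g_pas (i.e. logarithmic scaling)
forall {
  Ra = 200*exp(a)       // Ra = 200 when a = 0
  cm = exp(b)           // cm = 1 when b = 0
  g_pas = 3e-5*exp(c)   // g_pas = 3e-5 when c = 0
}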

Other comments:

If you are using the standard run system (either by invoking NEURON with nrngui on the command line, or by a load_file("nrngui.hoc") statement elsewhere in your program), the assignment
dt = 1
will have no effect--dt will be reset to the default 0.025. To verify this for yourself, start nrngui, then at the oc> prompt type
dt = 1
then type
run()
hoc will reply
Changed dt
and if you now type
dt
at the oc> prompt, hoc will reply
0.025
This happens because run() calls the standard run system procedure setdt(), which forces dt to be compatible with a simulation parameter called steps_per_ms. steps_per_ms controls the intervals at which points are plotted. setdt() ensures that dt = 1/(N*steps_per_ms) where N = 1, 2, 3 . . .
In general, to change dt your code should employ the idiom

Code: Select all

dt = whatever value you like
steps_per_ms = 1/dt // or steps_per_ms = 1/dt/N if you prefer 1 point per N fadvances
Now that you know how to change dt to 1, I advise you not to. Why? Because the early part of the charging curve is the part that contains the most information about Ra. Throw that out, and you may be discarding the very data that will best constrain Ra. Of course, if it is contaminated by electrode artifact, it's garbage so ignore my caveat.

Suggestions to improve code readability, ease of debugging, and maintainability:

func error() contains many statements that only have to be executed once, e.g. the declaration of vec and its record() statement, forall { insert pas e_pas = 0 }, soma distance(). For faster optimization, you might want to factor these out, placing them either at the top level, or preferably in a separate proc that is called just before you start optimizing.

What is the practical benefit of writing a new file on every call to func error()? Doesn't this slow down the optimization process tremendously?

Why read file 254a_long_pulse_average.txt on every pass through func error()? This is another performance bottleneck. If these are the data against which you are going to compare simulation results, why not factor this code into a separate proc that you call just once, prior to entering the optimization loop?
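Taken together, the once-only work might be factored into a single proc along these lines (a sketch; prepare, vec, and expdata are placeholder names):

Code: Select all

objref vec, expdata
proc prepare() { localobj datafile
  // set up recording once; vec is rewritten on each run()
  vec = new Vector()
  vec.record(&soma.v(0.5))
  forall { insert pas  e_pas = 0 }
  soma distance()
  // read the experimental data once, before optimization starts
  expdata = new Vector()
  datafile = new File()
  datafile.ropen("254a_long_pulse_average.txt")
  expdata.scanf(datafile)
  datafile.close()
}
Call prepare() once before fit(); then func error() only has to assign parameters, run the simulation, and compare vec with expdata.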

Too many access statements, easy to become confused about which is the current section. For code clarity it is better to use section stack notation, e.g.
soma g_pas = whatever

Instead of embedding statements with "magic numbers"
for dendnr = 0, 41
for spinenr = 0, 1218
deep in func error() to iterate over relevant sections, use SectionLists. After the model's topology has been set up,

Code: Select all

NUMAPICALS = 42
NUMSPINES = 1219
objref apicallist, spinelist, necklist
apicallist = new SectionList()
for i = 0, NUMAPICALS-1 apic[i] apicallist.append()
spinelist = new SectionList()
necklist = new SectionList()
for i = 0, NUMSPINES-1 {
  spines[i] spinelist.append()
  neck[i] necklist.append()
}
Then

Code: Select all

for dendnr = 0, 41 {
  access apic[dendnr]
  . . .
becomes

Code: Select all

forsec apicallist {
  . . .
etc.

Instead of

Code: Select all

  k = 0
  for k = 1, nseg {
    apicloc = ((2*k-1)/(2*nseg))
use

Code: Select all

  for (x,0) { // iterate over all internal nodes
    apicloc = x // with for (x,0), x is already the normalized location of each internal node
Positional pointer argument syntax ($&2[4] etc.) is essentially unreadable. Instead, at the top of func error(), assign these values to local variables that are more mnemonic.

Code: Select all

func error() { local k, pra, pcm, d0, g0, gmax, . . . etc. . . .
  k = $&2[0] // steepness of the sigmoid
  pra = $&2[1] // parameter that governs Ra
  pcm = $&2[2] // parameter that governs cm
  d0 = $&2[3] // midpoint distance of the sigmoid
  g0 = $&2[4]
  gmax = $&2[5] // or whatever you choose to call this one
  . . .
  forall {
    Ra = 200 + pra * 10
    cm = 1 + pcm * 0.01
  }
  soma g_pas = g0
  . . .
With symbolic replacement, and the use of for (x,0) to iterate over all internal nodes of the current section,
the parameterized expression that governs g_pas becomes

Code: Select all

g_pas(x) = g0 + ( (gmax - g0) / ( 1+exp( (d0 - d)/k) ) )
As pretty as this is, you should probably factor it out into a separate function, instead of embedding it at multiple places inside func error().

Code: Select all

func sigmoid() { local min, max, x0, x, k
  // hoc positional arguments are numbered from $1
  min = $1
  max = $2
  x0 = $3
  x = $4
  k = $5
  return min + ( (max - min) / ( 1+exp( (x0 - x)/k) ) )
}
Then

Code: Select all

  for dendnr = 0, 41 {
    . . .
becomes

Code: Select all

  forsec apicallist {
    for (x,0) g_pas(x) = sigmoid(g0, gmax, d0, distance(x), k)
  }
etc.

Re: passive fit not possible

Post by michael.stegen »

Thanks, I see there is a lot to learn... I will build in all your suggestions. But apart from the bad programming style: does the program distribute g_pas in a sigmoidal shape over the whole neuron?

best regards, Micha

Re: passive fit not possible

Post by ted »

michael.stegen wrote:Does the program distribute g_pas in a sigmoidal shape all over the neuron?
Raises two questions.

1. Will g_pas in all sections be affected?
Only if the code iterates over all segments in all sections.
If you're going to do that, there's no need to iterate separately over three different SectionLists. No need for SectionLists at all. Just

Code: Select all

forall for (x,0) {
  statement that assigns desired value to g_pas(x) 
}
2. Will g_pas ever approach the max or min value in any segment?
Not necessarily. Depends on the geometry of the cell and the choice of k and d0. You could toil long and hard trying to do this right, but in this situation you're probably better off taking advantage of the GUI's ability to generate hoc code that you can then reuse for your own purposes. Get your model cell into the CellBuilder, then use its "parameterized subsets" feature to specify the exact rule you want to govern the spatial variation of channel density. Then export the model to a hoc file. You will find that, along with a lot of other stuff, the hoc file contains

Code: Select all

proc celldef() {
  topol()
  subsets()
  geom()
  biophys()
  geom_nseg()
  biophys_inhomo()
}
Look for proc biophys_inhomo() and you'll see that it calls another proc that contains a formula that asserts the rule that governs your channel density. For example, suppose you have a cell model with a subset of neurites called "apicals" in which sodium channel density declines linearly with path distance from the soma, reaching 0 at the very most distal termination in that subset. Export that to a hoc file called cell.hoc, and in this file you will find a proc biophys_inhomo() that looks like this:

Code: Select all

proc biophys_inhomo() {
  // Path Length from root translated so most proximal end at 0
  //   and normalized so most distal end at 1 ranges from 0 to 1
  apicals_x = new SubsetDomainIterator(apicals, 0, 1, 1)
  gnabar_hh_apicals_x()
}
and there is a proc called gnabar_hh_apicals_x() that looks like this:

Code: Select all

proc gnabar_hh_apicals_x() {local x, p, p0, p1, b, m
  apicals_x.update()
  p0 = apicals_x.p0  p1 = apicals_x.p1
  b = 0.12
  m = -0.12
  for apicals_x.loop() {
    x = apicals_x.x  p = apicals_x.p
    gnabar_hh(x) = b + m*p/(p1 - p0)
  }
}
Suppose you wanted b and m to be under program control. Revise this proc by commenting out the stuff you don't like and entering new stuff that you do like:

Code: Select all

// proc gnabar_hh_apicals_x() {local x, p, p0, p1, b, m
my_b = 0.12 // make visible at the top level with unique names
my_m = -0.12 // and assign "reasonable" starting values
proc gnabar_hh_apicals_x() {local x, p, p0, p1
  apicals_x.update()
  p0 = apicals_x.p0  p1 = apicals_x.p1
//  b = 0.12
//  m = -0.12
  for apicals_x.loop() {
    x = apicals_x.x  p = apicals_x.p
//    gnabar_hh(x) = b + m*p/(p1 - p0)
    gnabar_hh(x) = my_b + my_m*p/(p1 - p0)
  }
}
Better for this case might be

Code: Select all

my_b = 0.12 // make visible to hoc with unique names
my_m = -my_b // and ensure "reasonable" starting values
which guarantees that gnabar_hh falls to 0 at the most distal point and is never < 0.

You can then combine the revised cell.hoc with your own user interface, simulation control, and analysis code, e.g. by writing an init.hoc that contains statements like these:

Code: Select all

load_file("nrngui.hoc")
load_file("revised_cell.hoc")
. . . more load_file statements to read whatever other hoc or ses files you need . . .

Check out the CellBuilder tutorial at http://www.neuron.yale.edu/neuron/docs
With the CellBuilder you can stipulate that any range variable parameter is some function f(p) where p is one of these three metrics:
path length from root
Euclidean distance from root
distance from a plane ("3D projection onto line" where "line" is the normal to the plane)
The function f can be anything you like. Built in are Boltzmann (sigmoid), ramp (linear), and exponential, but you can enter your own functional form.

"Now, there's the news! There's the big a-ha!"

Re: passive fit not possible

Post by michael.stegen »

Yes it is... Thanks a lot!