Interesting question.
Corinne wrote: In order to scale up the small neuron, after loading it, I ran the following one line of code:
I then checked to make sure the surface area was as expected.
It would also be a good idea to verify that
forall L *= k
where k is some constant, stretches the distances between pt3d points by the same scale factor, and that the discretized model is only stretched lengthwise, without affecting diameters. The easiest way to verify this is with a simple model, as done with this code:
Code:
load_file("nrngui.hoc")
create soma, dend
access soma
soma {
pt3dclear()
pt3dadd(0,0,0, 1)
pt3dadd(10,0,0, 2)
}
dend {
pt3dclear()
pt3dadd(0,0,0, 1)
pt3dadd(0,10,0, 2)
pt3dadd(5,15,0, 3)
}
forall nseg*=3 // odd nseg > 1 so each section has several internal nodes
// print each section's 3d points and its discretized (nseg-based) geometry
proc report() { local i
forall {
print secname()
print "3d data"
for i = 0,n3d()-1 print x3d(i), y3d(i), z3d(i), diam3d(i)
print "discretized representation"
for (x,0) print x, x*L, diam(x)
}
}
{ finitialize() } // ensure geometric specification has been fully executed/updated
// curly brackets suppress printing of "1"
print "original lengths"
report()
print "-----"
print "after forall L*=10"
forall L*=10
{ finitialize() } // ensure geometric specification has been fully executed/updated
report()
quit()
What would you expect to happen? Does it indeed occur?
When I then initialized and ran the code, I got some results that surprised me . . . because I think the activation potential of sodium should always be the same.
You mean you'd expect spike threshold to be unchanged. My guess is that most if not all experimentalists and modelers would have a similar expectation to yours.
But all observations are artefact until proven otherwise. Most surprises are merely revelations that one's intuition was incorrect.
So, I am wondering if something is amiss with my simple scaling method.
Healthy skepticism. One possibility is simply that forall L*=k does _not_ simply stretch sections lengthwise. What did the code I posted above suggest?
From your previous post, I see that nseg might need to be changed for size scaling larger than 10-20%.
Yes. Suggest you do the following:
First, rerun your tests on the original (unstretched) model, but make sure that the spatial grid is sufficiently fine: execute
forall nseg*=3
then do another run, and repeat if you see a significant change in any of your measures of model performance (after all, you have no guarantee that the original model's discretization was fine enough for these new tests you are running). Make a note of how many times you had to increase nseg by a factor of 3, and also note the effect that each increase has on your measures.
Next, stretch your model and run another series of simulations, increasing nseg by a factor of 3 each time and noting what this does to simulation results.
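For instance, that series might look like this (a sketch only — the scale factor of 10 is just an example; use whatever factor you applied to your own model):
Code:
// stretch the model lengthwise, then refine the spatial grid in steps of 3
forall L *= 10     // example scale factor; substitute your own
forall nseg *= 3   // first refinement; repeat this line before each new run
// run your simulations after each refinement and compare the results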
What happened, and what might account for these results?
Next, try a much simpler model: an axon comprising two sections, each 1 um in diameter and at least 300 um long, with nseg = 1 in each. Make one passive, and put hh in the other (be sure to set e_pas to -65 mV). Attach an IClamp to their junction and apply a long depolarizing current. Adjust its amplitude to find the lowest frequency of repetitive spiking (let the simulation run for at least 300 ms to make sure it fires repeatedly), then double the current amplitude so the model spikes at a somewhat higher rate. Finally, execute
forall nseg*=3
and see what happens. Repeat again and again to see how apparent excitability is affected by discretization.
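A minimal hoc sketch of this two-section test (section names and the initial stim.amp are my own placeholders, not values from the exercise):
Code:
load_file("nrngui.hoc")
create pax, hhax          // one passive section, one hh section
pax {
  L = 300  diam = 1  nseg = 1
  insert pas
  e_pas = -65             // match the hh resting potential
}
hhax {
  L = 300  diam = 1  nseg = 1
  insert hh
}
connect hhax(0), pax(1)   // join the two sections end to end
objref stim
hhax stim = new IClamp(0) // current source at the junction
stim.del = 1
stim.dur = 300            // long depolarizing pulse
stim.amp = 0.1            // placeholder; adjust to find repetitive spiking
tstop = 350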
Now what do you think is going on?