
MRF, dt, Brent's and error

Posted: Sat Mar 30, 2013 11:58 pm
by CCohen
Hi,

When running the MRF (say, following tutorials 1 and 2), what is the relationship between dt (in the RunControl) and the error reported by Brent's praxis fitting?

Thanks.
-Charles

Re: MRF, dt, Brent's and error

Posted: Sun Mar 31, 2013 2:07 pm
by ted
You're asking an almost well-posed question. Praxis is merely a steepest descent strategy. It uses whatever error metric one feeds it. Appropriate error metrics would be continuous (and preferably mostly differentiable) functions of the parameters that are being optimized.

"OK, so let me ask it this way: how does dt affect the error metric used by Praxis in the MRF?"

Which error metric? The MRF offers the usual "sum of squared errors over a time interval," and this will scale with dt as you might expect: the more points contained in an interval, the larger the sum of squared errors. However, the MRF also offers "error in the time at which a spike occurs," which doesn't have a clear relationship to dt. Users can also specify their own error metrics, for which there may or may not be a necessary relationship to dt, such as "mean spike frequency over a time interval."
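The dt-scaling of a plain sum-of-squared-errors metric can be illustrated outside NEURON. In this sketch (the traces, offset, and dt values are arbitrary choices, not anything from the MRF), the per-point error is roughly constant, so the sum grows with the number of sample points in the interval, i.e. roughly as 1/dt:

```python
import math

def sse(dt, t_stop=10.0):
    """Sum of squared errors between a 'model' and a 'target' trace
    sampled every dt over [0, t_stop]. With a constant per-point error,
    the sum scales with the number of points (~ t_stop/dt)."""
    n = int(t_stop / dt)
    total = 0.0
    for i in range(n + 1):
        t = i * dt
        model = math.sin(t)
        target = math.sin(t) + 0.01  # constant offset stands in for model error
        total += (model - target) ** 2
    return total

print(sse(0.1))   # coarser sampling: fewer points, smaller sum
print(sse(0.01))  # ~10x more points: ~10x larger sum
```

This is why a raw sum of squared errors is only comparable between runs that used the same dt; dividing by the number of points (a mean squared error) removes the dependence.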

Re: MRF, dt, Brent's and error

Posted: Sun Mar 31, 2013 2:35 pm
by CCohen
Yes, well, I'm new at this deeper stuff, so pardon my inexperience.

OK, so the general idea is to make dt as small as possible? Obviously much smaller than any time interval of the curve being fitted. But how much smaller? Is there such a thing as "too small and bad"? For example, with the forward or backward Euler method there is such a thing as a bad choice of dt (having to do with returning unstable solutions). Is that the case as well for Brent's praxis as implemented in NEURON, independent of whatever error metric the user may supply over and above the basic praxis implementation in NEURON?

Hopefully well-posed...?

Thanks!

Re: MRF, dt, Brent's and error

Posted: Sun Mar 31, 2013 4:29 pm
by ted
charles1 wrote:the general idea is to make dt as small as possible? Obviously much smaller than any time interval (of your curve being fitted). But how much smaller?
dt should be small enough that simulation accuracy is acceptable. Acceptable accuracy depends partly on one's intent, and partly on the system that is being studied. Too small a dt and you're wasting your time, maybe even introducing error because of the finite precision of numerical integration. Too large a dt and you get inaccurate results, or even instability.

A completely empirical approach is often acceptable. Here's the algorithm:

Code:

run a simulation using the default dt
call this the "control" result
repeat
  dt <- dt/k  (k might be 2, 5, 10, whatever you like)
  run a new simulation
  call this the "test" result
  if the control and test results are significantly different
    throw away the old control result and replace it with the test result
until the control and test results are not significantly different
use the dt that produced acceptable accuracy
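The refinement loop above can be sketched in plain Python. This is a toy stand-in, not NEURON: `simulate` integrates dy/dt = -y with forward Euler, and the tolerance, refinement factor k, and lower bound on dt are illustrative choices.

```python
import math

def simulate(dt, t_stop=5.0):
    """Return y(t_stop) for dy/dt = -y, y(0) = 1, via forward Euler with step dt.
    Stands in for 'run a simulation' in the pseudocode above."""
    y = 1.0
    for _ in range(int(round(t_stop / dt))):
        y += dt * (-y)
    return y

def find_dt(dt=0.1, k=2, tol=1e-4, dt_min=1e-6):
    """Halve dt until two successive runs agree to within tol, then
    return the dt that already gave acceptable accuracy."""
    control = simulate(dt)            # "control" result at the starting dt
    while dt > dt_min:
        dt /= k                       # refine the step
        test = simulate(dt)           # "test" result
        if abs(test - control) <= tol:
            return dt * k             # previous dt was already accurate enough
        control = test                # replace control with the better result
    return dt                         # hit the floor without converging

dt_ok = find_dt()
print(dt_ok, simulate(dt_ok), math.exp(-5.0))
```

For a real fitting session the same idea applies: compare the quantity you actually care about (the error metric, a spike time, a peak voltage) across successive dt values, not just the raw trace.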