I was playing around with the "vdest.record(&var, Dt)" format for the Vector record method, and I recorded &t because I wasn't sure if this would be recording the values at some point a little before multiples of Dt, a little after, or right on the money. The resulting vector had a mixture of t values either being recorded right on the multiple of Dt, or one time step before. Dt was an integer multiple of dt. Is this the intended behavior? Are these actually the appropriate time values when variables would be recorded, so that when generating time vectors for graphs, I should use "tvec.record(&t, Dt)" instead of just using tvec.indgen(Dt)?
One reason it's important is that I'm using cvode.event() to update the recorded variable on the appropriate time steps, but it looks like the values are not always updated because sometimes the vector is actually recording on the timestep before the cvode.event() is delivered. Is it convoluted to be using cvode.event() to prepare the value for vector.record()? Should I maybe just use the function called by cvode.event() to write the values to the vector after it updates the value?
recording t
Re: recording t
P.S. Mac OSX 10.5.7
NEURON -- Release 7.0 (281:80827e3cd201) 80827e3cd201
- Site Admin
- Posts: 6300
- Joined: Wed May 18, 2005 4:50 pm
- Location: Yale University School of Medicine
- Contact:
Re: recording t
bhalterm wrote: The resulting vector had a mixture of t values either being recorded right on the multiple of Dt, or one time step before. Dt was an integer multiple of dt.
Sounds like roundoff error has reared its ugly head. I bet your dt lacks a finite representation in base 2.
bhalterm wrote: Are these actually the appropriate time values when variables would be recorded
Appropriate or not, they are the actual times at which the variable would be recorded. When using Vector.record(), I generally also record t for future use.
bhalterm wrote: One reason it's important is that I'm using cvode.event() to update the recorded variable on the appropriate time steps, but it looks like the values are not always updated because sometimes the vector is actually recording on the timestep before the cvode.event() is delivered. Is it convoluted to be using cvode.event() to prepare the value for vector.record()? Should I maybe just use the function called by cvode.event() to write the values to the vector after it updates the value?
I don't quite envision what this means. Can you send me (ted dot carnevale at yale dot edu) a compact bit of code that illustrates what you're doing?
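To make the roundoff mechanism concrete, here is a small illustration in plain Python (not NEURON code; dt = 0.1 and Dt = 0.5 are arbitrary values chosen for the demonstration) of how accumulated time steps drift off exact multiples of the sampling interval:

```python
# Illustration (not NEURON code): accumulating a step that has no exact
# binary representation makes t drift away from exact multiples of Dt.
dt = 0.1          # no exact double-precision representation
Dt = 0.5          # intended sampling interval (5 steps of dt)
t = 0.0
samples = []
for step in range(1, 11):
    t += dt
    if step % 5 == 0:      # every 5th step should land on a multiple of Dt
        samples.append(t)

print(samples)    # [0.5, 0.9999999999999999] -- the second sample falls
                  # just short of 1.0, i.e. "one time step before"
```

A sampler that fires at exact multiples of Dt would therefore see the value from the preceding step at t = 1.0, which matches the behavior described above.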
Re: recording t
I've noticed some strange behavior: vector.record() doesn't save the value at the final time point when used with the run() command, but it does when used with fadvance().
For instance,
Code: Select all
objref tvec
tvec = new Vector()
tvec.record(&t)
run()
records t up to (tstop - dt), whereas
Code: Select all
objref tvec
tvec = new Vector()
tvec.record(&t)
init()
while (t < tstop) fadvance()
records t up to and including tstop.
Is this intended? Could it have something to do with the issues in this thread? I can deal with it, but it's slightly annoying.
Thanks,
Erik
Re: recording t
eschombu wrote: I've noticed some strange behavior: vector.record() doesn't save the value at the final time point when used with the run() command, but it does when used with fadvance().
Works fine for me. What are your tstop and dt (assuming you're using fixed time step)?
eschombu wrote: Could it have something to do with the issues in this thread?
Re: recording t
Yes, now when I simply start nrngui and perform the actions I listed above, I do get the same behavior with run() and fadvance(). I am still seeing vec.record() not writing after the final time step in my simulations, but perhaps it is a specific problem caused by something in the (complicated) simulations. I thought I had tested the above simplified procedure before posting this reply, but perhaps I did something peculiar, like test it during the same NEURON session as one of the simulations. I'll look into it more when I get a chance and post an update if I can discover the issue. Thanks!
Re: recording t
Actually I encountered the same problem, and also a bug when using while (t < tstop) { fadvance() }.
I sent you an email with some lines of code which give this result.
I use a prerun; maybe eschombu used this, too?
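The while (t < tstop) loop condition itself is vulnerable to roundoff. A sketch in plain Python (not NEURON code; dt = 0.1 and tstop = 1.0 are illustrative values) shows how the accumulated t can land just below tstop and trigger an extra step:

```python
# Sketch of a fixed-step advance loop with the roundoff pitfall:
# "while (t < tstop)" can take one extra step when the accumulated
# t lands just below tstop instead of exactly on it.
dt = 0.1
tstop = 1.0
t = 0.0
steps = 0
while t < tstop:
    t += dt
    steps += 1

# Exact arithmetic predicts 10 steps, but after 10 additions of 0.1
# t == 0.9999999999999999 < 1.0, so an 11th step is taken.
print(steps)   # 11
```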
Re: recording t
MBeining's post and example code emailed to me brought my attention back to this thread, and now I think I understand what at least some of the earlier posts were about. None of this has anything to do with "prerun initialization." It's all about roundoff error that is an unavoidable consequence of finite precision floating point arithmetic. Just be mindful that all floating point calculations in NEURON, GENESIS, etc. use finite precision floating point arithmetic, and use a little care to write code that isn't critically vulnerable to roundoff error.
"What, computers do sloppy math?"
You bet. Example: 0.1 (and many other "nice" decimal numbers like 0.05, 0.025, 0.01--the list is endless) have no exact finite precision binary equivalent. Add up enough 0.1s and eventually you'll get a result that differs significantly from things you learned in school, like "0.1 times an integer that is a multiple of 10 produces a nice whole number." hoc's float_epsilon offers a workaround that is occasionally helpful (read about it here http://www.neuron.yale.edu/neuron/stati ... at_epsilon), but don't overuse it.
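The failure mode and the workaround can be seen in a few lines of plain Python (the tolerance value is an arbitrary choice for illustration), mimicking the kind of tolerant comparison that hoc's float_epsilon enables:

```python
# Ten 0.1s do not sum to exactly 1.0 in binary floating point.
# A small tolerance (in the spirit of hoc's float_epsilon) recovers
# the comparison you'd expect from exact arithmetic.
eps = 1e-9                 # arbitrary illustrative tolerance

t = sum([0.1] * 10)        # accumulate ten 0.1s
print(t == 1.0)            # False: t is 0.9999999999999999
print(abs(t - 1.0) < eps)  # True: tolerant comparison recovers equality
```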