### Unexpected additive phase delay

Posted:

**Tue May 09, 2017 4:39 am**

We have a simple ball-and-stick model, a soma and an axon, into which we inject a noisy sinusoidal input to look at the output response of the soma. In doing this we want to explore the transfer function of the neuron. We look at the magnitude and phase of the output, where the magnitude is determined by the ability of the neuron to phase-lock its firing rate to that of the input. To obtain both curves we apply the circular-statistics framework to the spike timings, as described by Ilin (2013).
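For reference, the circular-statistics measures we mean can be sketched as follows. This is a minimal pure-Python sketch; the function name and the convention that the stimulus phase at time t is 2*pi*f*t are our own choices for illustration, not taken from Ilin's paper:

```python
import cmath
import math

def lock_magnitude_and_phase(spike_times, f):
    """Vector strength (magnitude) and mean phase of spikes
    relative to a sinusoid of frequency f (Hz).

    Each spike at time t (s) is mapped onto the unit vector
    exp(i * 2*pi*f*t); the length of the mean vector is the
    phase-locking magnitude, and its angle is the output phase."""
    vectors = [cmath.exp(1j * 2 * math.pi * f * t) for t in spike_times]
    mean = sum(vectors) / len(vectors)
    return abs(mean), cmath.phase(mean)

# Perfectly locked spikes: one spike per cycle, 20 ms after each cycle start.
f = 5.0  # Hz
spikes = [n / f + 0.02 for n in range(100)]
mag, phase = lock_magnitude_and_phase(spikes, f)
# mag ~ 1.0 (perfect locking); phase ~ 2*pi*f*0.02 ~ 0.628 rad
```

With perfectly locked spikes all unit vectors coincide, so the magnitude is 1; jittered or unlocked spikes shorten the mean vector toward 0.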

We notice that the phase of the output is non-stationary, whereas in reality the phase, for any given input frequency, should be stationary in time. As a result, the magnitude displays a much smaller bandwidth than expected, because the non-stationary phase causes destructive interference between the first half of a simulation and the second half. This happens only for very specific input frequencies, and it is independent of the firing rate of the neuron.
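The destructive-interference effect can be illustrated numerically (a toy sketch, not our simulation code): if the spike phase drifts by pi between the two halves of a run, each half on its own still shows strong locking, while the whole-run magnitude collapses.

```python
import cmath
import math

def vector_strength(phases):
    """Length of the mean unit vector of a list of phases (rad)."""
    mean = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(mean)

# First half locked at phase 0, second half drifted to phase pi.
first_half = [0.0] * 500
second_half = [math.pi] * 500

half1 = vector_strength(first_half)                 # ~1.0
half2 = vector_strength(second_half)                # ~1.0
whole = vector_strength(first_half + second_half)   # ~0: vectors cancel
```

This is why an analysis over the full 100 s run can report a much narrower bandwidth than an analysis over either half alone.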

We have already verified that the neuron receives the correct input signal, both with a self-written mod file and by creating the stimulus offline and playing it back with Vector.play(). The issue is also not model dependent: besides this ball-and-stick model we have looked at a detailed model, obtained from a third party, which gives the same results. We use standard HH equations, which produce the expected spike shape, F-I curve, and dV/dt curve.

Our conclusion is that there is some kind of time delay at the level of the spike timings, increasing with simulation length, which in turn causes the problems described above, even though we have not implemented any such delay; it is plain HH. We are using NEURON through Python. A simulation typically runs for 100 seconds of simulation time, and the delay presents itself on timescales longer than 10 seconds.
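A delay growing linearly with simulation time would produce exactly this signature. This is a hypothetical mechanism and the drift rate below is made up for illustration: if a spike locked at time t is reported at t*(1 + alpha), its phase error is 2*pi*f*alpha*t, so the phase slip accumulated over a run of length T is 2*pi*f*alpha*T, and the locking magnitude collapses only at those frequencies f where the total slip approaches a full cycle.

```python
import cmath
import math

def run_magnitude(f, alpha, t_end=100.0):
    """Phase-locking magnitude for spikes perfectly locked to
    frequency f (Hz) whose times drift as t -> t * (1 + alpha)."""
    spikes = [n / f for n in range(int(t_end * f))]  # perfectly locked
    drifted = [t * (1 + alpha) for t in spikes]      # cumulative delay
    vecs = [cmath.exp(1j * 2 * math.pi * f * t) for t in drifted]
    return abs(sum(vecs) / len(vecs))

alpha = 1e-4  # hypothetical relative drift of spike times
# Total phase slip over 100 s is 2*pi*f*alpha*100 rad.
low = run_magnitude(1.0, alpha)     # slip ~0.06 rad: magnitude barely affected
high = run_magnitude(100.0, alpha)  # slip ~2*pi rad: near-total cancellation
```

The frequency selectivity falls out naturally: the same drift rate leaves 1 Hz locking intact while destroying it at 100 Hz, consistent with the problem appearing only at specific input frequencies and being independent of firing rate.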
