Adding randomness to NetCon "delay"

iamdescartes
Posts: 4
Joined: Wed Jul 25, 2012 1:31 pm

Adding randomness to NetCon "delay"

Post by iamdescartes » Fri Mar 29, 2013 2:23 pm

Hi all,

I am wondering whether it's possible to add some randomness to NetCon's delay parameter. Right now, I have cell A receiving excitation from cell B, and the delay parameter lets me set the latency between when cell B's membrane potential crosses threshold and when cell A receives the excitation. But I want there to be an inter-trial jitter in that latency - i.e. every time cell B's potential crosses threshold, it would randomly pick from a range of possible latency values and excite cell A after that latency (and the next time it's triggered, it would have a different latency, etc.).

I hope that made sense. Please let me know if you would like any more information or clarification and thanks so much for your help!

A

ted
Site Admin
Posts: 5601
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

Re: Adding randomness to NetCon "delay"

Post by ted » Fri Mar 29, 2013 5:15 pm

Is cell B an artificial spiking cell, or is it a biophysical model cell i.e. a model implemented with sections that integrates "real" synaptic currents? Does B project to just one target synapse on one other cell, or does it project to multiple synaptic mechanisms? If the latter, should all of the elicited postsynaptic responses show jitter, and if yes, should they all show the same jitter or should it be different? i.e. is the jitter in cell B's spike time (as it would be if B receives synaptic background "noise"), or is the jitter intrinsic to the synaptic mechanisms themselves (as it would be if, say, vesicle docking and release varied from one presynaptic spike to the next).

iamdescartes

Re: Adding randomness to NetCon "delay"

Post by iamdescartes » Mon Apr 01, 2013 11:39 am

Hi,

Thanks for the rapid response. To answer your questions:
ted wrote:Is cell B an artificial spiking cell, or is it a biophysical model cell i.e. a model implemented with sections that integrates "real" synaptic currents?
Oops - I misspoke when I wrote the original question: cell B is an artificial spiking cell (a NetStim), so there's no "membrane potential crossing threshold".
Cell A is a single compartment biophysical model with real synaptic currents.
ted wrote:Does B project to just one target synapse on one other cell, or does it project to multiple synaptic mechanisms?
Cell B projects onto A and onto another cell (cell C - another single-compartment biophysical cell).
Both of these connections use NetCon.
ted wrote:If the latter, should all of the elicited postsynaptic responses show jitter, and if yes, should they all show the same jitter or should it be different?
Yes, I would like to have the responses in both A and C show jitter and ideally have different jitters (independent of each other).

Just to clarify a little bit further:
Imagine that cells A and C are located far away from each other physically. Cell B drives both of these cells, but it's very close to A (and thus far from C). So when B fires, A experiences its postsynaptic event before C does (let's say, 3 ms vs 13 ms). But there's also a jitter across multiple B->A events and a jitter across multiple B->C events (and these are not identical jitters). e.g. when B fires at t1, A responds at t1+2ms and C responds at t1+15ms; then when B fires again at t2, A responds at t2+4ms and C responds at t2+11ms, etc. (i.e. C always responds later than A, and the jitter in the two synapses is independent).

I would ideally want to specify a range of values for the B->A delay (e.g. 2 ms to 4 ms at 0.1 ms intervals) and a different range of values for the B->C delay (e.g. 11 ms to 15 ms at 0.1 ms intervals), and each synaptic event would independently sample its latency from the corresponding distribution.

Thanks again for your help and please ask me further questions if anything doesn't make sense. Thanks!

A

ted

Re: Adding randomness to NetCon "delay"

Post by ted » Tue Apr 02, 2013 11:53 am

Forget about the 0.1 ms interval stuff--it introduces an unnecessary complication that isn't helpful for the implementation.

Start by thinking about a spike source "src" that projects to a single target "tgt" with fixed delay "del":


   nc
src-->tgt

src generates events at times ti. NetCon nc delivers events to tgt at times ti* = ti + nc.delay, where nc.delay = del.
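In hoc, that fixed-delay connection could be set up as follows (a sketch only - src, tgt, and the values are placeholders, not names from this thread):

```
objref nc
nc = new NetCon(src, tgt)  // src = spike source, tgt = synaptic point process
nc.delay = 3               // every event is delivered 3 ms after src generates it
nc.weight = 0.001          // sets weight[0], the first element of the weight vector
```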

A simple, direct way to implement jitter would be to interpose an artificial spiking cell "mid" between src and tgt.


   nc0   nc1
src-->mid-->tgt

mid would be an instance of a new class of artificial spiking cell that responds to an input event by launching a self event that returns after a latency tjit. The value of tjit can be drawn from a pseudorandom number generator ("rng") - a uniform distribution over a range [a,b] with a > 0 makes the most sense, but a negexp distribution would also be possible. Arrival of the self event makes mid generate an output event that a NetCon conveys to the target. The total latency from src to tgt is
nc0.delay + tjit + nc1.delay
If a model that uses this strategy is ever parallelized, then for the sake of efficiency one should make sure that nc0.delay = nc1.delay, so that each processor can execute for as long as possible before spike exchange is necessary; a similar consideration applies to multithreaded execution.
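A minimal NMODL sketch of such a cell might look like the following (the class name JitterCell and the range variables tjlo/tjhi are illustrative, not from the original post):

```
: jitcell.mod -- hypothetical "mid" cell that relays each input
: event after a uniformly distributed latency tjit
NEURON {
    ARTIFICIAL_CELL JitterCell
    RANGE tjlo, tjhi
}

PARAMETER {
    tjlo = 2 (ms)   : shortest jitter latency (must be > 0)
    tjhi = 4 (ms)   : longest jitter latency
}

NET_RECEIVE (w) {
    if (flag == 0) {
        : input event from src: launch a self event that
        : returns after tjit drawn from U[tjlo, tjhi]
        net_send(tjlo + (tjhi - tjlo)*scop_random(), 1)
    } else {
        : self event came back: emit an output event now
        net_event(t)
    }
}
```

scop_random() is convenient for a sketch, but for independent, reproducible streams each instance should be attached to its own Random object (e.g. the noiseFromRandom pattern used by NetStim).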

A side comment: the "weight" of the projection from src to tgt is specified by nc1.weight; nc0.weight is irrelevant.

Your model network would then look like this:
A <-- midA <-- B --> midC --> C

To ensure statistical independence of the jitter along each path (important for reproducibility and debugging), each mid* cell could be paired with its own pseudorandom sequence generator instance.
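Using the hypothetical JitterCell class sketched above, the wiring could look like this in hoc (synA and synC stand for the synaptic point processes on cells A and C; all names and numbers are illustrative):

```
objref midA, midC, nc0A, nc1A, nc0C, nc1C
midA = new JitterCell()
midA.tjlo = 2                  // B->A jitter range (ms)
midA.tjhi = 4
midC = new JitterCell()
midC.tjlo = 11                 // B->C jitter range (ms)
midC.tjhi = 15

nc0A = new NetCon(B, midA)     // B drives midA
nc1A = new NetCon(midA, synA)  // midA drives the synapse on A
nc1A.weight = 0.001            // weight belongs on the final NetCon
nc0C = new NetCon(B, midC)
nc1C = new NetCon(midC, synC)
nc1C.weight = 0.001
```

With the fixed NetCon delays set to 0, the total B->A latency is just tjit, i.e. uniform over [2,4] ms; in a parallel run the fixed delays would need to be > 0, and the jitter ranges reduced accordingly.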

A possible alternative implementation might be to eliminate the "mid" cell and build the jitter into tgt itself. Each input event would make tgt launch a self event that returns after a latency drawn from an rng. This requires only one NetCon per src->tgt path, reduces the size of the event queue, and allows longer run times between spike exchanges. With regard to the overhead of handling events per se, this seems marginally superior to using a "mid" cell.

However, the savings is illusory - what this approach saves in one area it loses in another, because it limits each tgt instance to receiving just one input stream: if two or more input streams converge on a single tgt, I don't see how to preserve the statistical independence of their jitter. Giving each stream its own tgt instance imposes a big performance hit if tgt is a biophysical synaptic mechanism governed by one or more ODEs or kinetic reactions, because each new equation that requires numerical integration is much more costly than putting another event in the event queue. And if tgt is an artificial spiking cell, so that each jittered stream requires its own tgt instance, with its own self events and output events that must be managed, where's the savings in that?

iamdescartes

Re: Adding randomness to NetCon "delay"

Post by iamdescartes » Wed Apr 03, 2013 11:36 am

Hi,

Thanks for the help! The implementation using 'mid' sounds like it would work perfectly for my purposes. I'll work on the new class and get back to you if things don't work out. Thanks a lot!

A

ted

Re: Adding randomness to NetCon "delay"

Post by ted » Thu Apr 04, 2013 10:57 am

Getting the code right can be tricky. Don't hesitate to ask if you have questions.
