
Gradual change of synaptic weights for mimicking neuromodulation

Posted: Mon Jun 12, 2017 11:49 pm
by rth
I want to model a gradual change in synaptic conductance due to a change in the concentration of some neuromodulator. I have a list of connections; each element of the list is a tuple of (NetCon, presynaptic index, postsynaptic indices). Now I'm trying to use Vector.play to change the synaptic weights during a simulation. Here is the slightly simplified code:

Code:

from neuron import h

# modulation time points (ms) and the corresponding synaptic weights
wmodT, wmodW = h.Vector(), h.Vector()
wmodT.from_python([500., 1500.])
wmodW.from_python([0.16e-2, 0.16e-2 * 2.])
for c, pre, post in connections:
    # play the same Vector into every NetCon's weight
    wmodW.play(c._ref_weight[0], wmodT, 1)
I want to run this code with multiple threads (pthreads) on my desktop, but it raises an error:

Code:

NEURON: We were unable to associate a PlayRecord item with a thread
 near line 0
 ^
        finitialize()
Traceback (most recent call last):
  File "network.py", line 1529, in <module>
    h.finitialize()
RuntimeError: hoc error
How can I work around this problem? I've tried creating a separate vector for each NetCon, but that doesn't work either.

Thank you,
rth

Re: Gradual change of synaptic weights for mimicking neuromodulation

Posted: Sun Jun 18, 2017 3:42 pm
by hines
NEURON: We were unable to associate a PlayRecord item with a thread
From https://www.neuron.yale.edu/neuron/stat ... ector.play
the method signature
vsrc.play(point_process_object, var_reference, ...)
will take care of the problem since:
For the local variable timestep method, CVode.use_local_dt() and/or multiple threads, ParallelContext.nthread() , it is often helpful to provide specific information about which cell the var pointer is associated with by inserting as the first arg some POINT_PROCESS object which is located on the cell. This is necessary if the pointer is not a RANGE variable and is much more efficient if it is. The fixed step and global variable time step method do not need or use this information for the local step method but will use it for multiple threads. It is therefore a good idea to supply it if possible.
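Applied to the code in the first post, a minimal sketch of that fix might look like this (assuming each NetCon in connections targets a point process, which NetCon.syn() returns; a separate Vector is created per NetCon because, as noted below, a Vector instance can play into only one variable):

Code:

from neuron import h

wmodT = h.Vector()
wmodT.from_python([500., 1500.])

wvecs = []                                   # keep references so the Vectors stay alive
for c, pre, post in connections:
    w = h.Vector()
    w.from_python([0.16e-2, 0.16e-2 * 2.])
    # First argument: the POINT_PROCESS the pointer belongs to, so NEURON
    # can associate this PlayRecord item with the correct thread.
    w.play(c.syn(), c._ref_weight[0], wmodT, 1)   # 1 => interpolate between time points
    wvecs.append(w)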

A vector instance can play into only one reference variable. So if there are a lot of them and only a few modulation instances, it may be more efficient to modify the synapse so the NET_RECEIVE block multiplies the weight argument by a POINTER to one of a small number of modulation values per thread (or else a function that returns the modulation value).
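A hypothetical sketch of the Python side of that alternative, assuming a modified Exp2Syn (called ModExp2Syn here) whose NET_RECEIVE block multiplies its weight argument by a POINTER variable named mod, and ignoring for brevity the per-thread bookkeeping mentioned above. All synapses are wired to one shared modulation value, so only that single value has to be changed over time:

Code:

from neuron import h
h.load_file("stdrun.hoc")                    # for continuerun()

h("modsource = 1")                           # shared modulation value (hoc global)

for c, pre, post in connections:
    syn = c.syn()                            # target point process of this NetCon
    # wire the hypothetical POINTER variable 'mod' of ModExp2Syn to the shared value
    h.setpointer(h._ref_modsource, 'mod', syn)

# one simple way to advance the modulation: run in chunks and step the value
h.finitialize(-65)
for tnext, mod in [(500., 1.), (1500., 2.)]:
    h.modsource = mod
    h.continuerun(tnext)

This trades one PlayRecord per NetCon for a single value that every synapse reads, which is the efficiency point above.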

Re: Gradual change of synaptic weights for mimicking neuromodulation

Posted: Tue Jun 20, 2017 3:17 pm
by rth
A vector instance can play into only one reference variable. So if there are a lot of them and only a few modulation instances, it may be more efficient to modify the synapse so the NET_RECEIVE block multiplies the weight argument by a POINTER to one of a small number of modulation values per thread (or else a function that returns the modulation value).
Thank you, I got your idea. There should be a RANGE variable (say 'scale') in a modified Exp2Syn, controlled locally by Vector.play. On every NET_RECEIVE event, I'll multiply the weight by this variable.
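For reference, a minimal sketch of the Python side of that plan, assuming a modified Exp2Syn (called ScaledExp2Syn here) with a RANGE variable scale that NET_RECEIVE multiplies the weight argument by. Because scale is a RANGE variable of the point process, passing the point process as the first argument to Vector.play gives each thread the association it needs:

Code:

from neuron import h

wmodT = h.Vector()
wmodT.from_python([500., 1500.])

scale_vecs = []                              # keep references so the Vectors stay alive
for c, pre, post in connections:
    syn = c.syn()                            # the (modified) synapse targeted by this NetCon
    sv = h.Vector()
    sv.from_python([1., 2.])                 # relative scale factor over time
    sv.play(syn, syn._ref_scale, wmodT, 1)   # point process first; 1 => interpolate
    scale_vecs.append(sv)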