vector.play memory efficiency
Posted: Wed Jun 17, 2020 2:04 pm
I have a simulation where I need to play hundreds of vectors into synaptic conductances. These vectors are all just scaled versions of one another. Here is an outline of the code:
It is my understanding that in the code below, I must keep a copy of every vector to be played (here, I store them in syn_veclist). The problem is that for long simulations with large numbers of neurons, this consumes *a lot* of RAM. Alternatively, I could use a callback that reads the signal values and adjusts the synaptic conductances timestep-by-timestep, but that would slow the simulation down tremendously.
So here’s my question: is there any way to implement vector.play in such a way that I can just store one vector in memory, and apply scaled versions of it to numerous variables? Thanks for the help.
Code:
from neuron import h
import numpy as np

signal = np.loadtxt("...")  # some txt file that defines the trace for synaptic gmax
syn_gmax_vec = h.Vector(signal)
syn_veclist = []
for cell in cellList:
    # Divide gmax by the cell's in-degree so that total synaptic weight is the
    # same for every cell, irrespective of its number of incoming connections.
    norm_vec = syn_gmax_vec.c().div(cell.inDeg)  # clone the Vector, then scale it in place
    syn_veclist.append(norm_vec)
    norm_vec.play(cell.synlist[0]._ref_gmax, h.dt)
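To give a sense of scale for the RAM problem: here is a rough back-of-the-envelope estimate in plain NumPy-free Python. The cell count, trace length, and sample size below are illustrative assumptions (not numbers from my model), comparing the cost of one scaled copy per cell against one shared vector plus a per-cell scale factor.

```python
# Illustrative assumptions, not actual model parameters:
n_cells = 500            # "hundreds" of cells
n_samples = 1_000_000    # samples in the played trace (long simulation, small dt)
bytes_per_sample = 8     # double-precision float

# Current approach: one scaled copy of the trace per cell.
per_cell_copies_bytes = n_cells * n_samples * bytes_per_sample

# Hoped-for approach: one shared trace plus one scale factor per cell.
shared_vector_bytes = n_samples * bytes_per_sample + n_cells * bytes_per_sample

print(f"per-cell copies: {per_cell_copies_bytes / 1e9:.1f} GB")  # 4.0 GB
print(f"shared vector:   {shared_vector_bytes / 1e6:.1f} MB")    # 8.0 MB
```

So even at these modest sizes, per-cell copies cost gigabytes while a single shared vector would stay in the megabyte range.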