The exercises here are intended primarily to familiarize the student with techniques and tools useful for the implementation of networks in NEURON. We have chosen a relatively simple network, very loosely based on Hopfield-Brody, in order to minimize distractions due to the complexities of channel kinetics, dendritic trees, detailed network architecture, etc. The model is described in two files: net.py and intfire.mod.
The core of the network consists of artificial integrate-and-fire cells without channels or compartments. This is implemented using an ARTIFICIAL_CELL defined in intfire.mod and wrapped in the Cell class in net.py. Within the core network, there is only one kind of cell, so there are no issues of organizing interactions between cell populations. All synapses within the core network are inhibitory. (Hopfield-Brody, by contrast, uses a mix of inhibitory and excitatory cells.)
A single additional cell with Hodgkin-Huxley dynamics, receiving input from all the integrate-and-fire cells, is used as a way to measure network synchrony (it fires when it receives enough inputs within a narrow enough time window).
As you know, NEURON is optimized to handle the complex channel and compartment simulations that have been omitted from this exercise. The interested student might wish to convert this network into a network of spiking cells with realistic inhibitory interactions or a hybrid network with both realistic and artificial cells. Such an extended exercise would more clearly demonstrate NEURON's advantages for performing network simulations.
Although this is a minimal model, learning the ropes is still difficult. Therefore, we suggest that you go through the entire lesson relatively quickly before returning to delve more deeply into the exercises. Some of the exercises are really more homework projects.
The basic intfire implementation in NEURON utilizes a decaying state variable (m, as a stand-in for voltage) which is pushed up by the arrival of an excitatory input or down by the arrival of an inhibitory input (m = m + w). When m exceeds threshold the cell "fires," sending events to other connected cells.
if (m>1) { ... net_event(t) : trigger synapses
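A minimal, time-stepped Python sketch of this rule may help make it concrete. NEURON's actual implementation is event-driven and lives in intfire.mod; the function name and signature below are illustrative only:

```python
import math

def step_intfire(m, w_events, dt, tau, threshold=1.0):
    """One naive time step of a leaky integrate-and-fire state variable.

    m decays exponentially toward 0 with time constant tau; each synaptic
    event adds its weight w (w > 0 pushes m up, w < 0 pushes it down).
    Returns (new_m, fired). A didactic stand-in for NEURON's event-driven
    ARTIFICIAL_CELL, not a copy of it.
    """
    m *= math.exp(-dt / tau)      # passive decay of the state variable
    for w in w_events:            # synaptic events arriving in this step
        m += w                    # m = m + w
    if m > threshold:             # the cell "fires" and resets
        return 0.0, True
    return m, False
```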
The integrate-and-fire neuron in the current model must fire spontaneously with no input, as well as firing when a threshold is reached. This is implemented by utilizing a firetime() routine to calculate when the state variable m will reach threshold assuming no other inputs during that time. This firing time is calculated based on the natural firing interval of the cell (invl) and the time constant for state variable decay (tau). When an input comes in, a new firetime is calculated after taking into account the synaptic input (m = m + w) which perturbs the state variable's trajectory towards threshold.
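The calculation can be sketched as follows, assuming an interval-fire trajectory in which m relaxes exponentially toward a suprathreshold target minf, with minf fixed by requiring that m travel from 0 to threshold 1 in exactly invl ms. This is a sketch of the idea; the actual mod file may differ in details:

```python
import math

def firetime(m, invl, tau):
    """Time until m reaches threshold 1, assuming no further input.

    m drifts exponentially toward a target minf chosen so that, starting
    from m = 0, threshold is reached after exactly invl ms:
        minf = 1 / (1 - exp(-invl / tau))
    Solving m(t) = minf + (m - minf) * exp(-t / tau) = 1 for t gives the
    closed form below. An inhibitory input (m pushed negative) lengthens
    the time to threshold; an excitatory one shortens it.
    """
    minf = 1.0 / (1.0 - math.exp(-invl / tau))
    return tau * math.log((minf - m) / (minf - 1.0))
```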
IntIbFire is wrapped by the class Cell. An instantiation of this class provides access to the underlying mechanism through its dynamics property.
The network has all-to-all inhibitory connectivity with all connections set to equal non-positive values (initially 0). The network is initially set up with fast firing cells at the bottom of the graph (Cell[0], shortest natural interval) and slower cells at the top (Cell[ncell-1], longest natural interval). Cells in between have sequential, evenly spaced periods.
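A sketch of how such graded intervals might be assigned. The function name and the default bounds are illustrative; net.py may do this differently:

```python
def graded_intervals(ncell, low=25.0, high=50.0):
    """Return ncell evenly spaced natural firing intervals, fastest first.

    Cell 0 gets the shortest interval (fastest firing), cell ncell-1 the
    longest, with the cells in between evenly spaced. Hypothetical helper,
    not taken from net.py.
    """
    step = (high - low) / (ncell - 1)
    return [low + i * step for i in range(ncell)]
```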
The synchronization mechanism requires that all of the cells fire spontaneously at similar frequencies. It is obvious that if all cells are started at the same time, they will still be roughly synchronous after one cycle (since they have similar intrinsic cycle periods). After two cycles, they will have drifted further apart. After many cycles, differences in period will be magnified, leading to no temporal relationship of firing.
The key observation utilized here is that firing is fairly synchronized one cycle after onset. The trick is to reset the cells after each cycle so that they start together again. They then fire with temporal differences equal to the differences in their intrinsic periods. This resetting can be provided by an inhibitory input which pushes state variable m down far from threshold (hyperpolarized, as it were). This could be accomplished through an external pacemaker that reset all the cells, thereby imposing an external frequency onto the network. The interesting observation in this network is that pacemaking can also be imposed from within, through an intrinsic connectivity that synchronizes all members to the will of the masses.
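A toy calculation, outside NEURON entirely, makes the argument concrete: without a reset, the n-th spike of a cell with period p falls at n*p, so period differences are magnified n-fold; a global per-cycle reset keeps the spread fixed at the spread of the intrinsic periods. All names here are illustrative:

```python
def final_cycle_spread(periods, ncycles, reset=False):
    """Spread (max - min) of firing times in the final cycle.

    Toy illustration, not the NEURON model. Without reset, each cell's
    ncycles-th spike lands at period * ncycles, so small period
    differences grow linearly with the cycle count. With a global reset
    at the end of each population cycle (the role inhibition plays in the
    network), every cycle starts together and looks like the first.
    """
    if reset:
        spikes = list(periods)                    # every cycle restarts together
    else:
        spikes = [p * ncycles for p in periods]   # drift accumulates
    return max(spikes) - min(spikes)
```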
Compile the mechanism with nrnivmodl (or your platform's equivalent), then run python -i net.py. Notice that, without any synaptic connections in the network, every cell fires at its own period and the output cell is quiet.
Try to understand how it is generating the figures you see, and what options can be easily changed.
Try running:
network.weight = -0.005
h.finitialize(-65)
h.continuerun(1000)
plot_raster_and_output_mv(network, t, output_v)
Repeat with network.weight = -0.05 and network.weight = -0.5. What happens to the network? What happens to the output cell?
In the network.weight = -0.5 case, you may have noticed that many neurons do not fire. Their periods are too long: before they can reach threshold, the population has fired again and reset them. Notice also that the period of network firing is longer than the natural periods of the individual cells. This is because the firing time is calibrated to give the natural period when m starts from 0; with the inhibition, each cycle starts with m negative, so threshold is reached later.
This will destroy synchrony; increasing the inhibitory weight makes it recover. This is a consequence of the exponential rise of the state variable: if the interval is short but the time constant is long, the cell will amplify small variations in the amount of inhibition received.
Each Cell instance in network.cells stores its spike times in its _spike_times property as a NEURON Vector. Build a list or dictionary of spike times by cell (convert the Vector objects to Python lists) and save it to a JSON file with indentation. Open the file in a text editor and confirm that it matches your expectations.
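One possible sketch, assuming _spike_times behaves like a sequence (a plain list can stand in for the NEURON Vector, so this runs outside NEURON; the function name is illustrative):

```python
import json

def save_spike_times(cells, filename):
    """Write {cell_index: [spike times]} to an indented JSON file.

    Assumes each cell exposes a _spike_times attribute convertible with
    list() -- true of NEURON Vectors and of plain Python lists alike.
    Returns the dictionary that was written, for checking.
    """
    data = {i: list(cell._spike_times) for i, cell in enumerate(cells)}
    with open(filename, "w") as f:
        json.dump(data, f, indent=4)
    return data
```

Note that JSON object keys are always strings, so the cell indices come back as "0", "1", ... when the file is reloaded.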
Make a version of plot_raster that loads the saved JSON spike time data and plots it. Confirm that it matches the original graph.
Convert it to two lists: one listing every spike time, in order, and one listing the corresponding cell id. Plot this and compare with the original to make sure they are the same. As a check, note that the length of each of the new lists is equal to the sum of the lengths of the per-cell spike time lists.
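A sketch of the conversion, including the length check suggested above (function name illustrative):

```python
def flatten_spikes(spikes_by_cell):
    """Convert {cell_id: [spike times]} into two parallel lists sorted by
    time: one of spike times, one of the corresponding cell ids."""
    pairs = sorted(
        (t, cid) for cid, times in spikes_by_cell.items() for t in times
    )
    times = [t for t, _ in pairs]
    ids = [cid for _, cid in pairs]
    # sanity check from the exercise: total length matches the per-cell sums
    assert len(times) == sum(len(v) for v in spikes_by_cell.values())
    return times, ids
```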
The readout neuron gives us one way of looking at synchrony. Implement a more direct approach based on the spike times. One possible metric: look at how evenly distributed the set of all spikes is; the closer the spikes from all the cells are to being uniformly distributed in time, the less synchronous the network.
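One concrete metric along those lines (an assumption, not something defined in net.py): the coefficient of variation of the inter-spike intervals of the pooled spike train. A uniformly spread train gives a CV near 0; tight population bursts separated by silence give a large CV:

```python
import math

def pooled_isi_cv(all_spike_times):
    """Coefficient of variation of the pooled train's inter-spike intervals.

    Pool the spike times of every cell, sort them, and compute the CV
    (standard deviation / mean) of the successive differences. Evenly
    spread spikes -> CV near 0 (asynchronous); population bursts -> many
    tiny intervals plus a few long gaps -> large CV (synchronous).
    """
    ts = sorted(all_spike_times)
    isis = [b - a for a, b in zip(ts, ts[1:])]
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    return math.sqrt(var) / mean
```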
Use your metric from the previous question.
As implemented, the network uses all-to-all connectivity. Experiment with different connectivity schemes, e.g. connecting according to a given probability (use Random123), connecting to the nearest n cells, etc., and assess how this affects synchrony.
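A sketch of the probabilistic scheme. Python's random module stands in here for NEURON's Random123-backed h.Random streams, which is what you would actually use inside net.py for reproducible per-cell streams; the function name and edge-list representation are illustrative:

```python
import random

def probabilistic_edges(ncell, p, seed=1):
    """Connect each ordered pair (i, j), i != j, with probability p.

    Returns a list of directed (source, target) index pairs. A fixed seed
    makes the draw reproducible, mimicking what Random123 streams give
    you in NEURON (including across parallel simulations).
    """
    rng = random.Random(seed)
    return [(i, j)
            for i in range(ncell)
            for j in range(ncell)
            if i != j and rng.random() < p]
```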
Plot the connectivity. Calculate the distributions of synaptic convergence and divergence. Find the set of all pairs of neurons with reciprocal synaptic connections (i.e. A→B and B→A). Highlight the reciprocal synapses in your connectivity plots.
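A sketch of the bookkeeping for convergence, divergence, and reciprocal pairs, given the connections as a list of directed (source, target) index pairs (a representation assumed here, not taken from net.py):

```python
from collections import Counter

def degree_and_reciprocal(edges):
    """Summarize a directed edge list.

    Returns (divergence, convergence, reciprocal): divergence counts each
    cell's out-degree, convergence its in-degree, and reciprocal is the
    set of unordered pairs {a, b} with both a->b and b->a present.
    """
    es = set(edges)  # de-duplicate repeated connections
    divergence = Counter(src for src, _ in es)
    convergence = Counter(dst for _, dst in es)
    reciprocal = {frozenset((a, b)) for a, b in es if a != b and (b, a) in es}
    return divergence, convergence, reciprocal
```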
This will require modifying the connect method as well as adding a section and the corresponding dynamics. Are you able to do this and maintain the basic synchronization properties we have seen?